bundle2: use HG2X in the header...
Pierre-Yves David
r21144:7a20fe8d default
@@ -1,739 +1,739 b''
# bundle2.py - generic container format to transmit arbitrary data.
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""Handling of the new bundle2 format

The goal of bundle2 is to act as an atomic packet to transmit a set of
payloads in an application agnostic way. It consists of a sequence of "parts"
that will be handed to and processed by the application layer.


General format architecture
===========================

The format is architectured as follows

 - magic string
 - stream level parameters
 - payload parts (any number)
 - end of stream marker.

The binary format
============================

All numbers are unsigned and big-endian.

stream level parameters
------------------------

Binary format is as follows

:params size: (16 bits integer)

  The total number of Bytes used by the parameters

:params value: arbitrary number of Bytes

  A blob of `params size` containing the serialized version of all stream
  level parameters.

  The blob contains a space separated list of parameters. Parameters with a
  value are stored in the form `<name>=<value>`. Both name and value are
  urlquoted.

  Empty names are obviously forbidden.

  A name MUST start with a letter. If this first letter is lower case, the
  parameter is advisory and can be safely ignored. However, when the first
  letter is capital, the parameter is mandatory and the bundling process MUST
  stop if it is not able to process it.

  Stream parameters use a simple textual format for two main reasons:

  - Stream level parameters should remain simple and we want to discourage
    any crazy usage.
  - Textual data allow easy human inspection of a bundle2 header in case of
    trouble.

  Any applicative level options MUST go into a bundle2 part instead.
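
  As an illustration (a sketch with made-up parameter names, not a dump of
  a real stream), a bundle carrying one advisory and one mandatory
  parameter could start with::

      HG2X                       magic string
      \x00\x17                   params size: 23 bytes
      compression=GZ Checksum    advisory 'compression', mandatory 'Checksum'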

Payload part
------------------------

Binary format is as follows

:header size: (16 bits integer)

  The total number of Bytes used by the part headers. When the header is
  empty (size = 0) this is interpreted as the end of stream marker.

:header:

  The header defines how to interpret the part. It contains two pieces of
  data: the part type, and the part parameters.

  The part type is used to route to an application level handler that can
  interpret the payload.

  Part parameters are passed to the application level handler. They are
  meant to convey information that will help the application level object
  to interpret the part payload.

  The binary format of the header is as follows

  :typesize: (one byte)

  :parttype: alphanumerical part name

  :partid: A 32 bits integer (unique in the bundle) that can be used to
           refer to this part.

  :parameters:

      A part's parameters may have arbitrary content, the binary structure
      is::

          <mandatory-count><advisory-count><param-sizes><param-data>

      :mandatory-count: 1 byte, number of mandatory parameters

      :advisory-count:  1 byte, number of advisory parameters

      :param-sizes:

          N couples of bytes, where N is the total number of parameters.
          Each couple contains (<size-of-key>, <size-of-value>) for one
          parameter.

      :param-data:

          A blob of bytes from which each parameter key and value can be
          retrieved using the list of size couples stored in the previous
          field.

          Mandatory parameters come first, then the advisory ones.

:payload:

    payload is a series of `<chunksize><chunkdata>`.

    `chunksize` is a 32 bits integer, `chunkdata` are plain bytes (as many
    as `chunksize` says). The payload part is concluded by a zero size
    chunk.

    The current implementation always produces either zero or one chunk.
    This is an implementation limitation that will ultimately be lifted.
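
    For example (an illustrative framing, not a dump of a real part), a
    payload carrying the five bytes 'hello' in a single chunk would be::

        \x00\x00\x00\x05hello\x00\x00\x00\x00

    that is, a 32 bits chunk size (5), the chunk data, and the zero size
    chunk concluding the payload.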

Bundle processing
============================

Each part is processed in order using a "part handler". Handlers are
registered for a certain part type.

The matching of a part to its handler is case insensitive. The case of the
part type is used to know if a part is mandatory or advisory. If the part
type contains any uppercase char it is considered mandatory. When no handler
is known for a mandatory part, the process is aborted and an exception is
raised. If the part is advisory and no handler is known, the part is ignored.
When the process is aborted, the full bundle is still read from the stream to
keep the channel usable. But none of the parts read after an abort are
processed. In the future, dropping the stream may become an option for
channels we do not care to preserve.
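
For example (hypothetical stream)::

    'output'  -> advisory: silently skipped if no handler is registered
    'OUTPUT'  -> mandatory: aborts processing if no handler is registered

Both spellings route to the handler registered for 'output', since the
lookup is done on the lower cased part type.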
"""

import util
import struct
import urllib
import string

import changegroup
from i18n import _

_pack = struct.pack
_unpack = struct.unpack

-_magicstring = 'HG20'
+_magicstring = 'HG2X'

_fstreamparamsize = '>H'
_fpartheadersize = '>H'
_fparttypesize = '>B'
_fpartid = '>I'
_fpayloadsize = '>I'
_fpartparamcount = '>BB'

preferedchunksize = 4096

def _makefpartparamsizes(nbparams):
    """return a struct format to read part parameter sizes

    The number of parameters is variable so we need to build that format
    dynamically.
    """
    return '>' + ('BB' * nbparams)
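
# Illustrative (not part of the original change): two parameters yield the
# format '>BBBB', i.e. four unsigned bytes holding the
# (<size-of-key>, <size-of-value>) couples:
#
#   _makefpartparamsizes(2) == '>BBBB'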

parthandlermapping = {}

def parthandler(parttype):
    """decorator that registers a function as a bundle2 part handler

    eg::

        @parthandler('myparttype')
        def myparttypehandler(...):
            '''process a part of type "my part".'''
            ...
    """
    def _decorator(func):
        lparttype = parttype.lower() # enforce lower case matching.
        assert lparttype not in parthandlermapping
        parthandlermapping[lparttype] = func
        return func
    return _decorator

class unbundlerecords(object):
    """keep record of what happens during an unbundle

    New records are added using `records.add('cat', obj)`. Where 'cat' is a
    category of record and obj is an arbitrary object.

    `records['cat']` will return all entries of this category 'cat'.

    Iterating on the object itself will yield `('category', obj)` tuples
    for all entries.

    All iterations happen in chronological order.
    """

    def __init__(self):
        self._categories = {}
        self._sequences = []
        self._replies = {}

    def add(self, category, entry, inreplyto=None):
        """add a new record of a given category.

        The entry can then be retrieved in the list returned by
        self['category']."""
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))
        if inreplyto is not None:
            self.getreplies(inreplyto).add(category, entry)

    def getreplies(self, partid):
        """get the subrecords that reply to a specific part"""
        return self._replies.setdefault(partid, unbundlerecords())

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

    def __len__(self):
        return len(self._sequences)

    def __nonzero__(self):
        return bool(self._sequences)

class bundleoperation(object):
    """an object that represents a single bundling process

    Its purpose is to carry unbundle-related objects and states.

    A new object should be created at the beginning of each bundle
    processing. The object is to be returned by the processing function.

    The object has very little content now; it will ultimately contain:
    * an access to the repo the bundle is applied to,
    * a ui object,
    * a way to retrieve a transaction to add changes to the repo,
    * a way to record the result of processing each part,
    * a way to construct a bundle response when applicable.
    """

    def __init__(self, repo, transactiongetter):
        self.repo = repo
        self.ui = repo.ui
        self.records = unbundlerecords()
        self.gettransaction = transactiongetter
        self.reply = None

class TransactionUnavailable(RuntimeError):
    pass

def _notransaction():
    """default method to get a transaction while processing a bundle

    Raise an exception to highlight the fact that no transaction was expected
    to be created"""
    raise TransactionUnavailable()

def processbundle(repo, unbundler, transactiongetter=_notransaction):
    """This function processes a bundle, applying its effects to/from a repo

    It iterates over each part then searches for and uses the proper handling
    code to process the part. Parts are processed in order.

    This is a very early version of this function that will be strongly
    reworked before final usage.

    An unknown mandatory part will abort the process.
    """
    op = bundleoperation(repo, transactiongetter)
    # todo:
    # - replace this with an init function soon.
    # - exception catching
    unbundler.params
    iterparts = unbundler.iterparts()
    part = None
    try:
        for part in iterparts:
            parttype = part.type
            # part keys are matched lower case
            key = parttype.lower()
            try:
                handler = parthandlermapping[key]
                op.ui.debug('found a handler for part %r\n' % parttype)
            except KeyError:
                if key != parttype: # mandatory parts
                    # todo:
                    # - use a more precise exception
                    raise
                op.ui.debug('ignoring unknown advisory part %r\n' % key)
                # consuming the part
                part.read()
                continue

            # handler is called outside the above try block so that we don't
            # risk catching KeyErrors from anything other than the
            # parthandlermapping lookup (any KeyError raised by handler()
            # itself represents a defect of a different variety).
            output = None
            if op.reply is not None:
                op.ui.pushbuffer(error=True)
                output = ''
            try:
                handler(op, part)
            finally:
                if output is not None:
                    output = op.ui.popbuffer()
            if output:
                outpart = bundlepart('output',
                                     advisoryparams=[('in-reply-to',
                                                      str(part.id))],
                                     data=output)
                op.reply.addpart(outpart)
            part.read()
    except Exception:
        if part is not None:
            # consume the bundle content
            part.read()
        for part in iterparts:
            # consume the bundle content
            part.read()
        raise
    return op
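
# Sketch of typical use (the transaction getter below is an assumption;
# any zero-argument callable returning a transaction works):
#
#   op = processbundle(repo, unbundler,
#                      lambda: repo.transaction('unbundle'))
#   for record in op.records['changegroup']:
#       record['return']   # result of each addchangegroup call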

def decodecaps(blob):
    """decode a bundle2 caps bytes blob into a dictionary

    The blob is a list of capabilities (one per line)
    Capabilities may have values using a line of the form::

        capability=value1,value2,value3

    The values are always a list."""
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if '=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split('=', 1)
            vals = vals.split(',')
        key = urllib.unquote(key)
        vals = [urllib.unquote(v) for v in vals]
        caps[key] = vals
    return caps

def encodecaps(caps):
    """encode a bundle2 caps dictionary into a bytes blob"""
    chunks = []
    for ca in sorted(caps):
        vals = caps[ca]
        ca = urllib.quote(ca)
        vals = [urllib.quote(v) for v in vals]
        if vals:
            ca = "%s=%s" % (ca, ','.join(vals))
        chunks.append(ca)
    return '\n'.join(chunks)
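
# Illustrative round-trip (hypothetical capability names):
#
#   blob = encodecaps({'HG2X': [], 'b2x:listkeys': ['namespace']})
#   # blob == 'HG2X\nb2x%3Alistkeys=namespace'
#   decodecaps(blob) == {'HG2X': [], 'b2x:listkeys': ['namespace']}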

class bundle20(object):
    """represent an outgoing bundle2 container

    Use the `addparam` method to add stream level parameters and `addpart`
    to populate it. Then call `getchunks` to retrieve all the binary chunks
    of data that compose the bundle2 container."""

    def __init__(self, ui, capabilities=()):
        self.ui = ui
        self._params = []
        self._parts = []
        self.capabilities = dict(capabilities)

    def addparam(self, name, value=None):
        """add a stream level parameter"""
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
        self._params.append((name, value))

    def addpart(self, part):
        """add a new part to the bundle2 container

        Parts contain the actual applicative payload."""
        assert part.id is None
        part.id = len(self._parts) # very cheap counter
        self._parts.append(part)

    def getchunks(self):
        self.ui.debug('start emission of %s stream\n' % _magicstring)
        yield _magicstring
        param = self._paramchunk()
        self.ui.debug('bundle parameter: %s\n' % param)
        yield _pack(_fstreamparamsize, len(param))
        if param:
            yield param

        self.ui.debug('start of parts\n')
        for part in self._parts:
            self.ui.debug('bundle part: "%s"\n' % part.type)
            for chunk in part.getchunks():
                yield chunk
        self.ui.debug('end of bundle\n')
        yield '\0\0'

    def _paramchunk(self):
        """return an encoded version of all stream parameters"""
        blocks = []
        for par, value in self._params:
            par = urllib.quote(par)
            if value is not None:
                value = urllib.quote(value)
                par = '%s=%s' % (par, value)
            blocks.append(par)
        return ' '.join(blocks)
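
# Sketch (assumes a `ui` object): an empty bundle reduces to the magic
# string, a zero params size, and the end of stream marker:
#
#   bundler = bundle20(ui)
#   ''.join(bundler.getchunks()) == 'HG2X\x00\x00\x00\x00'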

class unpackermixin(object):
    """A mixin to extract bytes and struct data from a stream"""

    def __init__(self, fp):
        self._fp = fp

    def _unpack(self, format):
        """unpack this struct format from the stream"""
        data = self._readexact(struct.calcsize(format))
        return _unpack(format, data)

    def _readexact(self, size):
        """read exactly <size> bytes from the stream"""
        return changegroup.readexactly(self._fp, size)


class unbundle20(unpackermixin):
    """interpret a bundle2 stream

    This class is fed with a binary stream and yields parts through its
    `iterparts` methods."""

    def __init__(self, ui, fp, header=None):
        """If header is specified, we do not read it out of the stream."""
        self.ui = ui
        super(unbundle20, self).__init__(fp)
        if header is None:
            header = self._readexact(4)
            magic, version = header[0:2], header[2:4]
            if magic != 'HG':
                raise util.Abort(_('not a Mercurial bundle'))
-            if version != '20':
+            if version != '2X':
                raise util.Abort(_('unknown bundle version %s') % version)
        self.ui.debug('start processing of %s stream\n' % header)

    @util.propertycache
    def params(self):
        """dictionary of stream level parameters"""
        self.ui.debug('reading bundle2 stream parameters\n')
        params = {}
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize:
            for p in self._readexact(paramssize).split(' '):
                p = p.split('=', 1)
                p = [urllib.unquote(i) for i in p]
                if len(p) < 2:
                    p.append(None)
                self._processparam(*p)
                params[p[0]] = p[1]
        return params

    def _processparam(self, name, value):
        """process a parameter, applying its effect if needed

        Parameters starting with a lower case letter are advisory and will
        be ignored when unknown. Those starting with an upper case letter
        are mandatory and this function will raise a KeyError when unknown.

        Note: no options are currently supported. Any input will be either
        ignored or will fail.
        """
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
        # Some logic will be later added here to try to process the option
        # for a dict of known parameters.
        if name[0].islower():
            self.ui.debug("ignoring unknown parameter %r\n" % name)
        else:
            raise KeyError(name)


    def iterparts(self):
        """yield all parts contained in the stream"""
        # make sure params have been loaded
        self.params
        self.ui.debug('start extraction of bundle2 parts\n')
        headerblock = self._readpartheader()
        while headerblock is not None:
            part = unbundlepart(self.ui, headerblock, self._fp)
            yield part
            headerblock = self._readpartheader()
        self.ui.debug('end of bundle2 stream\n')

    def _readpartheader(self):
        """reads a part header size and returns the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        self.ui.debug('part header size: %i\n' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None
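
# Sketch: reading the empty stream shown after bundle20._paramchunk back in
# (Python 2, assumes a `ui` object):
#
#   import cStringIO
#   fp = cStringIO.StringIO('HG2X\x00\x00\x00\x00')
#   unbundler = unbundle20(ui, fp)
#   list(unbundler.iterparts()) == []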


class bundlepart(object):
    """A bundle2 part contains application level payload

    The part `type` is used to route the part to the application level
    handler.
    """

    def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
                 data=''):
        self.id = None
        self.type = parttype
        self.data = data
        self.mandatoryparams = mandatoryparams
        self.advisoryparams = advisoryparams

    def getchunks(self):
        #### header
        ## parttype
        header = [_pack(_fparttypesize, len(self.type)),
                  self.type, _pack(_fpartid, self.id),
                 ]
        ## parameters
        # count
        manpar = self.mandatoryparams
        advpar = self.advisoryparams
        header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
        # size
        parsizes = []
        for key, value in manpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        for key, value in advpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        paramsizes = _pack(_makefpartparamsizes(len(parsizes) / 2), *parsizes)
        header.append(paramsizes)
        # key, value
        for key, value in manpar:
            header.append(key)
            header.append(value)
        for key, value in advpar:
            header.append(key)
            header.append(value)
        ## finalize header
        headerchunk = ''.join(header)
        yield _pack(_fpartheadersize, len(headerchunk))
        yield headerchunk
        ## payload
        for chunk in self._payloadchunks():
            yield _pack(_fpayloadsize, len(chunk))
            yield chunk
        # end of payload
        yield _pack(_fpayloadsize, 0)

    def _payloadchunks(self):
        """yield chunks of the part payload

        Exists to handle the different methods to provide data to a part."""
        # we only support fixed size data now.
        # This will be improved in the future.
        if util.safehasattr(self.data, 'next'):
            buff = util.chunkbuffer(self.data)
            chunk = buff.read(preferedchunksize)
            while chunk:
                yield chunk
                chunk = buff.read(preferedchunksize)
        elif len(self.data):
            yield self.data
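
# Sketch: building a part with a small in-memory payload (hypothetical
# parameter; `bundler` is a bundle20 instance):
#
#   part = bundlepart('output',
#                     advisoryparams=[('verbosity', 'debug')],
#                     data='hello')
#   bundler.addpart(part)   # assigns part.id and queues the part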

class unbundlepart(unpackermixin):
    """a bundle part read from a bundle"""

    def __init__(self, ui, header, fp):
        super(unbundlepart, self).__init__(fp)
        self.ui = ui
        # unbundle state attr
        self._headerdata = header
        self._headeroffset = 0
        self._initialized = False
        self.consumed = False
        # part data
        self.id = None
        self.type = None
        self.mandatoryparams = None
        self.advisoryparams = None
        self._payloadstream = None
        self._readheader()

    def _fromheader(self, size):
        """return the next <size> bytes from the header"""
        offset = self._headeroffset
        data = self._headerdata[offset:(offset + size)]
        self._headeroffset = offset + size
        return data

    def _unpackheader(self, format):
        """read given format from header

        This automatically computes the size of the format to read."""
        data = self._fromheader(struct.calcsize(format))
        return _unpack(format, data)

    def _readheader(self):
        """read the header and setup the object"""
        typesize = self._unpackheader(_fparttypesize)[0]
        self.type = self._fromheader(typesize)
        self.ui.debug('part type: "%s"\n' % self.type)
        self.id = self._unpackheader(_fpartid)[0]
        self.ui.debug('part id: "%s"\n' % self.id)
        ## reading parameters
        # param count
        mancount, advcount = self._unpackheader(_fpartparamcount)
        self.ui.debug('part parameters: %i\n' % (mancount + advcount))
        # param size
        fparamsizes = _makefpartparamsizes(mancount + advcount)
        paramsizes = self._unpackheader(fparamsizes)
        # make it a list of couples again
        paramsizes = zip(paramsizes[::2], paramsizes[1::2])
        # split mandatory from advisory
        mansizes = paramsizes[:mancount]
        advsizes = paramsizes[mancount:]
        # retrieve param values
        manparams = []
        for key, value in mansizes:
            manparams.append((self._fromheader(key), self._fromheader(value)))
        advparams = []
        for key, value in advsizes:
            advparams.append((self._fromheader(key), self._fromheader(value)))
        self.mandatoryparams = manparams
        self.advisoryparams = advparams
        ## part payload
        def payloadchunks():
            payloadsize = self._unpack(_fpayloadsize)[0]
            self.ui.debug('payload chunk size: %i\n' % payloadsize)
            while payloadsize:
                yield self._readexact(payloadsize)
                payloadsize = self._unpack(_fpayloadsize)[0]
                self.ui.debug('payload chunk size: %i\n' % payloadsize)
        self._payloadstream = util.chunkbuffer(payloadchunks())
        # we read the data, tell it
        self._initialized = True

    def read(self, size=None):
        """read payload data"""
        if not self._initialized:
            self._readheader()
        if size is None:
            data = self._payloadstream.read()
        else:
            data = self._payloadstream.read(size)
        if size is None or len(data) < size:
            self.consumed = True
        return data


@parthandler('changegroup')
def handlechangegroup(op, inpart):
    """apply a changegroup part on the repo

    This is a very early implementation that will be massively reworked
    before being inflicted on any end-user.
    """
    # Make sure we trigger a transaction creation
    #
    # The addchangegroup function will get a transaction object by itself, but
    # we need to make sure we trigger the creation of a transaction object
    # used for the whole processing scope.
    op.gettransaction()
    cg = changegroup.unbundle10(inpart, 'UN')
    ret = changegroup.addchangegroup(op.repo, cg, 'bundle2', 'bundle2')
    op.records.add('changegroup', {'return': ret})
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = bundlepart('reply:changegroup', (),
                          [('in-reply-to', str(inpart.id)),
                           ('return', '%i' % ret)])
        op.reply.addpart(part)
    assert not inpart.read()

@parthandler('reply:changegroup')
def handlechangegroup(op, inpart):
    p = dict(inpart.advisoryparams)
    ret = int(p['return'])
    op.records.add('changegroup', {'return': ret}, int(p['in-reply-to']))

@parthandler('check:heads')
def handlechangegroup(op, inpart):
    """check that the heads of the repo did not change

    This is used to detect a push race when using unbundle.
    This replaces the "heads" argument of unbundle."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    if heads != op.repo.heads():
        raise exchange.PushRaced()

@parthandler('output')
def handleoutput(op, inpart):
    """forward output captured on the server to the client"""
    for line in inpart.read().splitlines():
        op.ui.write(('remote: %s\n' % line))

@parthandler('replycaps')
def handlereplycaps(op, inpart):
    """Notify that a reply bundle should be created

    The payload contains the capabilities information for the reply"""
    caps = decodecaps(inpart.read())
    if op.reply is None:
        op.reply = bundle20(op.ui, caps)
@@ -1,717 +1,717 b''
# exchange.py - utility to exchange data between repos.
#
# Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from i18n import _
from node import hex, nullid
import errno, urllib
import util, scmutil, changegroup, base85
import discovery, phases, obsolete, bookmarks, bundle2

def readbundle(ui, fh, fname, vfs=None):
    header = changegroup.readexactly(fh, 4)

    alg = None
    if not fname:
        fname = "stream"
        if not header.startswith('HG') and header.startswith('\0'):
            fh = changegroup.headerlessfixup(fh, header)
            header = "HG10"
            alg = 'UN'
    elif vfs:
        fname = vfs.join(fname)

    magic, version = header[0:2], header[2:4]

    if magic != 'HG':
        raise util.Abort(_('%s: not a Mercurial bundle') % fname)
    if version == '10':
        if alg is None:
            alg = changegroup.readexactly(fh, 2)
        return changegroup.unbundle10(fh, alg)
-    elif version == '20':
+    elif version == '2X':
        return bundle2.unbundle20(ui, fh, header=magic + version)
    else:
        raise util.Abort(_('%s: unknown bundle version %s') % (fname, version))
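
# Sketch of the dispatch above (illustrative headers):
#
#   'HG10' -> read two more bytes naming the compression ('UN', 'GZ' or
#             'BZ') and hand off to changegroup.unbundle10
#   'HG2X' -> hand off to bundle2.unbundle20, which reads its own stream
#             level parameters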
39
39
40
40
41 class pushoperation(object):
41 class pushoperation(object):
42 """A object that represent a single push operation
42 """A object that represent a single push operation
43
43
44 It purpose is to carry push related state and very common operation.
44 It purpose is to carry push related state and very common operation.
45
45
46 A new should be created at the beginning of each push and discarded
46 A new should be created at the beginning of each push and discarded
47 afterward.
47 afterward.
48 """
48 """
49
49
50 def __init__(self, repo, remote, force=False, revs=None, newbranch=False):
50 def __init__(self, repo, remote, force=False, revs=None, newbranch=False):
51 # repo we push from
51 # repo we push from
52 self.repo = repo
52 self.repo = repo
53 self.ui = repo.ui
53 self.ui = repo.ui
54 # repo we push to
54 # repo we push to
55 self.remote = remote
55 self.remote = remote
56 # force option provided
56 # force option provided
57 self.force = force
57 self.force = force
58 # revs to be pushed (None is "all")
58 # revs to be pushed (None is "all")
59 self.revs = revs
59 self.revs = revs
60 # allow push of new branch
60 # allow push of new branch
61 self.newbranch = newbranch
61 self.newbranch = newbranch
62 # did a local lock get acquired?
62 # did a local lock get acquired?
63 self.locallocked = None
63 self.locallocked = None
64 # Integer version of the push result
64 # Integer version of the push result
65 # - None means nothing to push
65 # - None means nothing to push
66 # - 0 means HTTP error
66 # - 0 means HTTP error
67 # - 1 means we pushed and remote head count is unchanged *or*
67 # - 1 means we pushed and remote head count is unchanged *or*
68 # we have outgoing changesets but refused to push
68 # we have outgoing changesets but refused to push
69 # - other values as described by addchangegroup()
69 # - other values as described by addchangegroup()
70 self.ret = None
70 self.ret = None
71 # discover.outgoing object (contains common and outgoing data)
71 # discover.outgoing object (contains common and outgoing data)
72 self.outgoing = None
72 self.outgoing = None
73 # all remote heads before the push
73 # all remote heads before the push
74 self.remoteheads = None
74 self.remoteheads = None
75 # testable as a boolean indicating if any nodes are missing locally.
75 # testable as a boolean indicating if any nodes are missing locally.
76 self.incoming = None
76 self.incoming = None
77 # set of all heads common after changeset bundle push
77 # set of all heads common after changeset bundle push
78 self.commonheads = None
78 self.commonheads = None
79
79
80 def push(repo, remote, force=False, revs=None, newbranch=False):
80 def push(repo, remote, force=False, revs=None, newbranch=False):
81 '''Push outgoing changesets (limited by revs) from a local
81 '''Push outgoing changesets (limited by revs) from a local
82 repository to remote. Return an integer:
82 repository to remote. Return an integer:
83 - None means nothing to push
83 - None means nothing to push
84 - 0 means HTTP error
84 - 0 means HTTP error
85 - 1 means we pushed and remote head count is unchanged *or*
85 - 1 means we pushed and remote head count is unchanged *or*
86 we have outgoing changesets but refused to push
86 we have outgoing changesets but refused to push
87 - other values as described by addchangegroup()
87 - other values as described by addchangegroup()
88 '''
88 '''
89 pushop = pushoperation(repo, remote, force, revs, newbranch)
89 pushop = pushoperation(repo, remote, force, revs, newbranch)
90 if pushop.remote.local():
90 if pushop.remote.local():
91 missing = (set(pushop.repo.requirements)
91 missing = (set(pushop.repo.requirements)
92 - pushop.remote.local().supported)
92 - pushop.remote.local().supported)
93 if missing:
93 if missing:
94 msg = _("required features are not"
94 msg = _("required features are not"
95 " supported in the destination:"
95 " supported in the destination:"
96 " %s") % (', '.join(sorted(missing)))
96 " %s") % (', '.join(sorted(missing)))
97 raise util.Abort(msg)
97 raise util.Abort(msg)
98
98
99 # there are two ways to push to remote repo:
99 # there are two ways to push to remote repo:
100 #
100 #
101 # addchangegroup assumes local user can lock remote
101 # addchangegroup assumes local user can lock remote
102 # repo (local filesystem, old ssh servers).
102 # repo (local filesystem, old ssh servers).
103 #
103 #
104 # unbundle assumes local user cannot lock remote repo (new ssh
104 # unbundle assumes local user cannot lock remote repo (new ssh
105 # servers, http servers).
105 # servers, http servers).
106
106
107 if not pushop.remote.canpush():
107 if not pushop.remote.canpush():
108 raise util.Abort(_("destination does not support push"))
108 raise util.Abort(_("destination does not support push"))
109 # get local lock as we might write phase data
109 # get local lock as we might write phase data
110 locallock = None
110 locallock = None
111 try:
111 try:
112 locallock = pushop.repo.lock()
112 locallock = pushop.repo.lock()
113 pushop.locallocked = True
113 pushop.locallocked = True
114 except IOError, err:
114 except IOError, err:
115 pushop.locallocked = False
115 pushop.locallocked = False
116 if err.errno != errno.EACCES:
116 if err.errno != errno.EACCES:
117 raise
117 raise
118 # source repo cannot be locked.
118 # source repo cannot be locked.
119 # We do not abort the push, but just disable the local phase
119 # We do not abort the push, but just disable the local phase
120 # synchronisation.
120 # synchronisation.
121 msg = 'cannot lock source repository: %s\n' % err
121 msg = 'cannot lock source repository: %s\n' % err
122 pushop.ui.debug(msg)
122 pushop.ui.debug(msg)
123 try:
123 try:
124 pushop.repo.checkpush(pushop)
124 pushop.repo.checkpush(pushop)
125 lock = None
125 lock = None
126 unbundle = pushop.remote.capable('unbundle')
126 unbundle = pushop.remote.capable('unbundle')
127 if not unbundle:
127 if not unbundle:
128 lock = pushop.remote.lock()
128 lock = pushop.remote.lock()
129 try:
129 try:
130 _pushdiscovery(pushop)
130 _pushdiscovery(pushop)
131 if _pushcheckoutgoing(pushop):
131 if _pushcheckoutgoing(pushop):
132 pushop.repo.prepushoutgoinghooks(pushop.repo,
132 pushop.repo.prepushoutgoinghooks(pushop.repo,
133 pushop.remote,
133 pushop.remote,
134 pushop.outgoing)
134 pushop.outgoing)
135 if pushop.remote.capable('bundle2'):
135 if pushop.remote.capable('bundle2'):
136 _pushbundle2(pushop)
136 _pushbundle2(pushop)
137 else:
137 else:
138 _pushchangeset(pushop)
138 _pushchangeset(pushop)
139 _pushcomputecommonheads(pushop)
139 _pushcomputecommonheads(pushop)
140 _pushsyncphase(pushop)
140 _pushsyncphase(pushop)
141 _pushobsolete(pushop)
141 _pushobsolete(pushop)
142 finally:
142 finally:
143 if lock is not None:
143 if lock is not None:
144 lock.release()
144 lock.release()
145 finally:
145 finally:
146 if locallock is not None:
146 if locallock is not None:
147 locallock.release()
147 locallock.release()
148
148
149 _pushbookmark(pushop)
149 _pushbookmark(pushop)
150 return pushop.ret
150 return pushop.ret
151
151
152 def _pushdiscovery(pushop):
152 def _pushdiscovery(pushop):
153 # discovery
153 # discovery
154 unfi = pushop.repo.unfiltered()
154 unfi = pushop.repo.unfiltered()
155 fci = discovery.findcommonincoming
155 fci = discovery.findcommonincoming
156 commoninc = fci(unfi, pushop.remote, force=pushop.force)
156 commoninc = fci(unfi, pushop.remote, force=pushop.force)
157 common, inc, remoteheads = commoninc
157 common, inc, remoteheads = commoninc
158 fco = discovery.findcommonoutgoing
158 fco = discovery.findcommonoutgoing
159 outgoing = fco(unfi, pushop.remote, onlyheads=pushop.revs,
159 outgoing = fco(unfi, pushop.remote, onlyheads=pushop.revs,
160 commoninc=commoninc, force=pushop.force)
160 commoninc=commoninc, force=pushop.force)
161 pushop.outgoing = outgoing
161 pushop.outgoing = outgoing
162 pushop.remoteheads = remoteheads
162 pushop.remoteheads = remoteheads
163 pushop.incoming = inc
163 pushop.incoming = inc
164
164
165 def _pushcheckoutgoing(pushop):
165 def _pushcheckoutgoing(pushop):
166 outgoing = pushop.outgoing
166 outgoing = pushop.outgoing
167 unfi = pushop.repo.unfiltered()
167 unfi = pushop.repo.unfiltered()
168 if not outgoing.missing:
168 if not outgoing.missing:
169 # nothing to push
169 # nothing to push
170 scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
170 scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
171 return False
171 return False
172 # something to push
172 # something to push
173 if not pushop.force:
173 if not pushop.force:
174 # if repo.obsstore is false (empty), there are no obsolete markers,
174 # if repo.obsstore is false (empty), there are no obsolete markers,
175 # so we can skip the iteration entirely
175 # so we can skip the iteration entirely
176 if unfi.obsstore:
176 if unfi.obsstore:
177 # these messages are defined here because of the 80-char line limit
177 # these messages are defined here because of the 80-char line limit
178 mso = _("push includes obsolete changeset: %s!")
178 mso = _("push includes obsolete changeset: %s!")
179 mst = "push includes %s changeset: %s!"
179 mst = "push includes %s changeset: %s!"
180 # plain versions for i18n tool to detect them
180 # plain versions for i18n tool to detect them
181 _("push includes unstable changeset: %s!")
181 _("push includes unstable changeset: %s!")
182 _("push includes bumped changeset: %s!")
182 _("push includes bumped changeset: %s!")
183 _("push includes divergent changeset: %s!")
183 _("push includes divergent changeset: %s!")
184 # If we are about to push and there is at least one
184 # If we are about to push and there is at least one
185 # obsolete or unstable changeset in the missing set, then at
185 # obsolete or unstable changeset in the missing set, then at
186 # least one of the missing heads will be obsolete or
186 # least one of the missing heads will be obsolete or
187 # unstable, so checking heads only is enough.
187 # unstable, so checking heads only is enough.
188 for node in outgoing.missingheads:
188 for node in outgoing.missingheads:
189 ctx = unfi[node]
189 ctx = unfi[node]
190 if ctx.obsolete():
190 if ctx.obsolete():
191 raise util.Abort(mso % ctx)
191 raise util.Abort(mso % ctx)
192 elif ctx.troubled():
192 elif ctx.troubled():
193 raise util.Abort(_(mst)
193 raise util.Abort(_(mst)
194 % (ctx.troubles()[0],
194 % (ctx.troubles()[0],
195 ctx))
195 ctx))
196 newbm = pushop.ui.configlist('bookmarks', 'pushing')
196 newbm = pushop.ui.configlist('bookmarks', 'pushing')
197 discovery.checkheads(unfi, pushop.remote, outgoing,
197 discovery.checkheads(unfi, pushop.remote, outgoing,
198 pushop.remoteheads,
198 pushop.remoteheads,
199 pushop.newbranch,
199 pushop.newbranch,
200 bool(pushop.incoming),
200 bool(pushop.incoming),
201 newbm)
201 newbm)
202 return True
202 return True
203
203
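# A minimal sketch of an equivalent head-only check using a revset,
# assuming the obsolete()/unstable()/bumped()/divergent() revset
# predicates are available; the explicit loop above is kept in the real
# code so the offending changeset can be named in the abort message.
def _sketchchecktroubled(unfi, outgoing):
    troubled = unfi.set('%ln and (obsolete() or unstable() or bumped()'
                        ' or divergent())', outgoing.missingheads)
    for ctx in troubled:
        raise util.Abort(_("push includes troubled changeset: %s!") % ctx)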
204 def _pushbundle2(pushop):
204 def _pushbundle2(pushop):
205 """push data to the remote using bundle2
205 """push data to the remote using bundle2
206
206
207 The only currently supported type of data is changegroup but this will
207 The only currently supported type of data is changegroup but this will
208 evolve in the future."""
208 evolve in the future."""
209 # Send the known heads to the server for race detection.
209 # Send the known heads to the server for race detection.
210 capsblob = urllib.unquote(pushop.remote.capable('bundle2'))
210 capsblob = urllib.unquote(pushop.remote.capable('bundle2'))
211 caps = bundle2.decodecaps(capsblob)
211 caps = bundle2.decodecaps(capsblob)
212 bundler = bundle2.bundle20(pushop.ui, caps)
212 bundler = bundle2.bundle20(pushop.ui, caps)
213 # create reply capability
213 # create reply capability
214 capsblob = bundle2.encodecaps(pushop.repo.bundle2caps)
214 capsblob = bundle2.encodecaps(pushop.repo.bundle2caps)
215 bundler.addpart(bundle2.bundlepart('replycaps', data=capsblob))
215 bundler.addpart(bundle2.bundlepart('replycaps', data=capsblob))
216 if not pushop.force:
216 if not pushop.force:
217 part = bundle2.bundlepart('CHECK:HEADS', data=iter(pushop.remoteheads))
217 part = bundle2.bundlepart('CHECK:HEADS', data=iter(pushop.remoteheads))
218 bundler.addpart(part)
218 bundler.addpart(part)
219 # add the changegroup bundle
219 # add the changegroup bundle
220 cg = changegroup.getlocalbundle(pushop.repo, 'push', pushop.outgoing)
220 cg = changegroup.getlocalbundle(pushop.repo, 'push', pushop.outgoing)
221 cgpart = bundle2.bundlepart('CHANGEGROUP', data=cg.getchunks())
221 cgpart = bundle2.bundlepart('CHANGEGROUP', data=cg.getchunks())
222 bundler.addpart(cgpart)
222 bundler.addpart(cgpart)
223 stream = util.chunkbuffer(bundler.getchunks())
223 stream = util.chunkbuffer(bundler.getchunks())
224 reply = pushop.remote.unbundle(stream, ['force'], 'push')
224 reply = pushop.remote.unbundle(stream, ['force'], 'push')
225 try:
225 try:
226 op = bundle2.processbundle(pushop.repo, reply)
226 op = bundle2.processbundle(pushop.repo, reply)
227 except KeyError, exc:
227 except KeyError, exc:
228 raise util.Abort('missing support for %s' % exc)
228 raise util.Abort('missing support for %s' % exc)
229 cgreplies = op.records.getreplies(cgpart.id)
229 cgreplies = op.records.getreplies(cgpart.id)
230 assert len(cgreplies['changegroup']) == 1
230 assert len(cgreplies['changegroup']) == 1
231 pushop.ret = cgreplies['changegroup'][0]['return']
231 pushop.ret = cgreplies['changegroup'][0]['return']
232
232
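# A minimal sketch of the client-side bundle2 assembly performed above,
# assuming the bundle20/bundlepart API used in this file; 'repo', 'remote'
# and 'cg' are hypothetical stand-ins for the pushoperation attributes.
def _sketchbundle2push(repo, remote, cg):
    caps = bundle2.decodecaps(urllib.unquote(remote.capable('bundle2')))
    bundler = bundle2.bundle20(repo.ui, caps)
    bundler.addpart(bundle2.bundlepart('CHANGEGROUP', data=cg.getchunks()))
    stream = util.chunkbuffer(bundler.getchunks())
    return remote.unbundle(stream, ['force'], 'push')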
233 def _pushchangeset(pushop):
233 def _pushchangeset(pushop):
234 """Make the actual push of changeset bundle to remote repo"""
234 """Make the actual push of changeset bundle to remote repo"""
235 outgoing = pushop.outgoing
235 outgoing = pushop.outgoing
236 unbundle = pushop.remote.capable('unbundle')
236 unbundle = pushop.remote.capable('unbundle')
237 # TODO: get bundlecaps from remote
237 # TODO: get bundlecaps from remote
238 bundlecaps = None
238 bundlecaps = None
239 # create a changegroup from local
239 # create a changegroup from local
240 if pushop.revs is None and not (outgoing.excluded
240 if pushop.revs is None and not (outgoing.excluded
241 or pushop.repo.changelog.filteredrevs):
241 or pushop.repo.changelog.filteredrevs):
242 # push everything,
242 # push everything,
243 # use the fast path, no race possible on push
243 # use the fast path, no race possible on push
244 bundler = changegroup.bundle10(pushop.repo, bundlecaps)
244 bundler = changegroup.bundle10(pushop.repo, bundlecaps)
245 cg = changegroup.getsubset(pushop.repo,
245 cg = changegroup.getsubset(pushop.repo,
246 outgoing,
246 outgoing,
247 bundler,
247 bundler,
248 'push',
248 'push',
249 fastpath=True)
249 fastpath=True)
250 else:
250 else:
251 cg = changegroup.getlocalbundle(pushop.repo, 'push', outgoing,
251 cg = changegroup.getlocalbundle(pushop.repo, 'push', outgoing,
252 bundlecaps)
252 bundlecaps)
253
253
254 # apply changegroup to remote
254 # apply changegroup to remote
255 if unbundle:
255 if unbundle:
256 # local repo finds heads on server, finds out what
256 # local repo finds heads on server, finds out what
257 # revs it must push. once revs transferred, if server
257 # revs it must push. once revs transferred, if server
258 # finds it has different heads (someone else won
258 # finds it has different heads (someone else won
259 # commit/push race), server aborts.
259 # commit/push race), server aborts.
260 if pushop.force:
260 if pushop.force:
261 remoteheads = ['force']
261 remoteheads = ['force']
262 else:
262 else:
263 remoteheads = pushop.remoteheads
263 remoteheads = pushop.remoteheads
264 # ssh: return remote's addchangegroup()
264 # ssh: return remote's addchangegroup()
265 # http: return remote's addchangegroup() or 0 for error
265 # http: return remote's addchangegroup() or 0 for error
266 pushop.ret = pushop.remote.unbundle(cg, remoteheads,
266 pushop.ret = pushop.remote.unbundle(cg, remoteheads,
267 'push')
267 'push')
268 else:
268 else:
269 # we return an integer indicating remote head count
269 # we return an integer indicating remote head count
270 # change
270 # change
271 pushop.ret = pushop.remote.addchangegroup(cg, 'push', pushop.repo.url())
271 pushop.ret = pushop.remote.addchangegroup(cg, 'push', pushop.repo.url())
272
272
273 def _pushcomputecommonheads(pushop):
273 def _pushcomputecommonheads(pushop):
274 unfi = pushop.repo.unfiltered()
274 unfi = pushop.repo.unfiltered()
275 if pushop.ret:
275 if pushop.ret:
276 # push succeeded, synchronize with the target of the push
276 # push succeeded, synchronize with the target of the push
277 cheads = pushop.outgoing.missingheads
277 cheads = pushop.outgoing.missingheads
278 elif pushop.revs is None:
278 elif pushop.revs is None:
279 # pushing everything failed, synchronize on all common heads
279 # pushing everything failed, synchronize on all common heads
280 cheads = pushop.outgoing.commonheads
280 cheads = pushop.outgoing.commonheads
281 else:
281 else:
282 # I want cheads = heads(::missingheads and ::commonheads)
282 # I want cheads = heads(::missingheads and ::commonheads)
283 # (missingheads is revs with secret changesets filtered out)
283 # (missingheads is revs with secret changesets filtered out)
284 #
284 #
285 # This can be expressed as:
285 # This can be expressed as:
286 # cheads = ( (missingheads and ::commonheads)
286 # cheads = ( (missingheads and ::commonheads)
287 # + (commonheads and ::missingheads))
287 # + (commonheads and ::missingheads))
288 # )
288 # )
289 #
289 #
290 # while trying to push we already computed the following:
290 # while trying to push we already computed the following:
291 # common = (::commonheads)
291 # common = (::commonheads)
292 # missing = ((commonheads::missingheads) - commonheads)
292 # missing = ((commonheads::missingheads) - commonheads)
293 #
293 #
294 # We can pick:
294 # We can pick:
295 # * missingheads part of common (::commonheads)
295 # * missingheads part of common (::commonheads)
296 common = set(pushop.outgoing.common)
296 common = set(pushop.outgoing.common)
297 nm = pushop.repo.changelog.nodemap
297 nm = pushop.repo.changelog.nodemap
298 cheads = [node for node in pushop.revs if nm[node] in common]
298 cheads = [node for node in pushop.revs if nm[node] in common]
299 # and
299 # and
300 # * commonheads parents on missing
300 # * commonheads parents on missing
301 revset = unfi.set('%ln and parents(roots(%ln))',
301 revset = unfi.set('%ln and parents(roots(%ln))',
302 pushop.outgoing.commonheads,
302 pushop.outgoing.commonheads,
303 pushop.outgoing.missing)
303 pushop.outgoing.missing)
304 cheads.extend(c.node() for c in revset)
304 cheads.extend(c.node() for c in revset)
305 pushop.commonheads = cheads
305 pushop.commonheads = cheads
306
306
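# A minimal sketch of the set identity described in the comment above,
# written directly as a revset; this assumes repo.set() accepts the same
# %ln list arguments used elsewhere in this file, and is an equivalent
# (if slower) formulation of the computation.
def _sketchcheads(unfi, commonheads, missingheads):
    revs = unfi.set('heads(::%ln and ::%ln)', missingheads, commonheads)
    return [c.node() for c in revs]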
307 def _pushsyncphase(pushop):
307 def _pushsyncphase(pushop):
308 """synchronise phase information locally and remotely"""
308 """synchronise phase information locally and remotely"""
309 unfi = pushop.repo.unfiltered()
309 unfi = pushop.repo.unfiltered()
310 cheads = pushop.commonheads
310 cheads = pushop.commonheads
311 if pushop.ret:
311 if pushop.ret:
312 # push succeeded, synchronize with the target of the push
312 # push succeeded, synchronize with the target of the push
313 cheads = pushop.outgoing.missingheads
313 cheads = pushop.outgoing.missingheads
314 elif pushop.revs is None:
314 elif pushop.revs is None:
315 # pushing everything failed, synchronize on all common heads
315 # pushing everything failed, synchronize on all common heads
316 cheads = pushop.outgoing.commonheads
316 cheads = pushop.outgoing.commonheads
317 else:
317 else:
318 # I want cheads = heads(::missingheads and ::commonheads)
318 # I want cheads = heads(::missingheads and ::commonheads)
319 # (missingheads is revs with secret changesets filtered out)
319 # (missingheads is revs with secret changesets filtered out)
320 #
320 #
321 # This can be expressed as:
321 # This can be expressed as:
322 # cheads = ( (missingheads and ::commonheads)
322 # cheads = ( (missingheads and ::commonheads)
323 # + (commonheads and ::missingheads))
323 # + (commonheads and ::missingheads))
324 # )
324 # )
325 #
325 #
326 # while trying to push we already computed the following:
326 # while trying to push we already computed the following:
327 # common = (::commonheads)
327 # common = (::commonheads)
328 # missing = ((commonheads::missingheads) - commonheads)
328 # missing = ((commonheads::missingheads) - commonheads)
329 #
329 #
330 # We can pick:
330 # We can pick:
331 # * missingheads part of common (::commonheads)
331 # * missingheads part of common (::commonheads)
332 common = set(pushop.outgoing.common)
332 common = set(pushop.outgoing.common)
333 nm = pushop.repo.changelog.nodemap
333 nm = pushop.repo.changelog.nodemap
334 cheads = [node for node in pushop.revs if nm[node] in common]
334 cheads = [node for node in pushop.revs if nm[node] in common]
335 # and
335 # and
336 # * commonheads parents on missing
336 # * commonheads parents on missing
337 revset = unfi.set('%ln and parents(roots(%ln))',
337 revset = unfi.set('%ln and parents(roots(%ln))',
338 pushop.outgoing.commonheads,
338 pushop.outgoing.commonheads,
339 pushop.outgoing.missing)
339 pushop.outgoing.missing)
340 cheads.extend(c.node() for c in revset)
340 cheads.extend(c.node() for c in revset)
341 pushop.commonheads = cheads
341 pushop.commonheads = cheads
342 # even when we don't push, exchanging phase data is useful
342 # even when we don't push, exchanging phase data is useful
343 remotephases = pushop.remote.listkeys('phases')
343 remotephases = pushop.remote.listkeys('phases')
344 if (pushop.ui.configbool('ui', '_usedassubrepo', False)
344 if (pushop.ui.configbool('ui', '_usedassubrepo', False)
345 and remotephases # server supports phases
345 and remotephases # server supports phases
346 and pushop.ret is None # nothing was pushed
346 and pushop.ret is None # nothing was pushed
347 and remotephases.get('publishing', False)):
347 and remotephases.get('publishing', False)):
348 # When:
348 # When:
349 # - this is a subrepo push
349 # - this is a subrepo push
350 # - and the remote supports phases
350 # - and the remote supports phases
351 # - and no changeset was pushed
351 # - and no changeset was pushed
352 # - and remote is publishing
352 # - and remote is publishing
353 # We may be in the issue 3871 case!
353 # We may be in the issue 3871 case!
354 # We drop the courtesy phase synchronisation that could
354 # We drop the courtesy phase synchronisation that could
355 # otherwise publish, on the remote, changesets that are
355 # otherwise publish, on the remote, changesets that are
356 # still draft locally.
356 # still draft locally.
357 remotephases = {'publishing': 'True'}
357 remotephases = {'publishing': 'True'}
358 if not remotephases: # old server or public-only reply from non-publishing
358 if not remotephases: # old server or public-only reply from non-publishing
359 _localphasemove(pushop, cheads)
359 _localphasemove(pushop, cheads)
360 # don't push any phase data as there is nothing to push
360 # don't push any phase data as there is nothing to push
361 else:
361 else:
362 ana = phases.analyzeremotephases(pushop.repo, cheads,
362 ana = phases.analyzeremotephases(pushop.repo, cheads,
363 remotephases)
363 remotephases)
364 pheads, droots = ana
364 pheads, droots = ana
365 ### Apply remote phase on local
365 ### Apply remote phase on local
366 if remotephases.get('publishing', False):
366 if remotephases.get('publishing', False):
367 _localphasemove(pushop, cheads)
367 _localphasemove(pushop, cheads)
368 else: # publish = False
368 else: # publish = False
369 _localphasemove(pushop, pheads)
369 _localphasemove(pushop, pheads)
370 _localphasemove(pushop, cheads, phases.draft)
370 _localphasemove(pushop, cheads, phases.draft)
371 ### Apply local phase on remote
371 ### Apply local phase on remote
372
372
373 # Get the list of all revs that are draft on the remote but public here.
373 # Get the list of all revs that are draft on the remote but public here.
374 # XXX Beware that this revset breaks if droots is not strictly a set of
374 # XXX Beware that this revset breaks if droots is not strictly a set of
375 # XXX roots; we may want to ensure that it is, but doing so is costly
375 # XXX roots; we may want to ensure that it is, but doing so is costly
376 outdated = unfi.set('heads((%ln::%ln) and public())',
376 outdated = unfi.set('heads((%ln::%ln) and public())',
377 droots, cheads)
377 droots, cheads)
378 for newremotehead in outdated:
378 for newremotehead in outdated:
379 r = pushop.remote.pushkey('phases',
379 r = pushop.remote.pushkey('phases',
380 newremotehead.hex(),
380 newremotehead.hex(),
381 str(phases.draft),
381 str(phases.draft),
382 str(phases.public))
382 str(phases.public))
383 if not r:
383 if not r:
384 pushop.ui.warn(_('updating %s to public failed!\n')
384 pushop.ui.warn(_('updating %s to public failed!\n')
385 % newremotehead)
385 % newremotehead)
386
386
387 def _localphasemove(pushop, nodes, phase=phases.public):
387 def _localphasemove(pushop, nodes, phase=phases.public):
388 """move <nodes> to <phase> in the local source repo"""
388 """move <nodes> to <phase> in the local source repo"""
389 if pushop.locallocked:
389 if pushop.locallocked:
390 phases.advanceboundary(pushop.repo, phase, nodes)
390 phases.advanceboundary(pushop.repo, phase, nodes)
391 else:
391 else:
392 # repo is not locked, do not change any phases!
392 # repo is not locked, do not change any phases!
393 # Informs the user that phases should have been moved when
393 # Informs the user that phases should have been moved when
394 # applicable.
394 # applicable.
395 actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
395 actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
396 phasestr = phases.phasenames[phase]
396 phasestr = phases.phasenames[phase]
397 if actualmoves:
397 if actualmoves:
398 pushop.ui.status(_('cannot lock source repo, skipping '
398 pushop.ui.status(_('cannot lock source repo, skipping '
399 'local %s phase update\n') % phasestr)
399 'local %s phase update\n') % phasestr)
400
400
401 def _pushobsolete(pushop):
401 def _pushobsolete(pushop):
402 """utility function to push obsolete markers to a remote"""
402 """utility function to push obsolete markers to a remote"""
403 pushop.ui.debug('try to push obsolete markers to remote\n')
403 pushop.ui.debug('try to push obsolete markers to remote\n')
404 repo = pushop.repo
404 repo = pushop.repo
405 remote = pushop.remote
405 remote = pushop.remote
406 if (obsolete._enabled and repo.obsstore and
406 if (obsolete._enabled and repo.obsstore and
407 'obsolete' in remote.listkeys('namespaces')):
407 'obsolete' in remote.listkeys('namespaces')):
408 rslts = []
408 rslts = []
409 remotedata = repo.listkeys('obsolete')
409 remotedata = repo.listkeys('obsolete')
410 for key in sorted(remotedata, reverse=True):
410 for key in sorted(remotedata, reverse=True):
411 # reverse sort to ensure we end with dump0
411 # reverse sort to ensure we end with dump0
412 data = remotedata[key]
412 data = remotedata[key]
413 rslts.append(remote.pushkey('obsolete', key, '', data))
413 rslts.append(remote.pushkey('obsolete', key, '', data))
414 if [r for r in rslts if not r]:
414 if [r for r in rslts if not r]:
415 msg = _('failed to push some obsolete markers!\n')
415 msg = _('failed to push some obsolete markers!\n')
416 repo.ui.warn(msg)
416 repo.ui.warn(msg)
417
417
418 def _pushbookmark(pushop):
418 def _pushbookmark(pushop):
419 """Update bookmark position on remote"""
419 """Update bookmark position on remote"""
420 ui = pushop.ui
420 ui = pushop.ui
421 repo = pushop.repo.unfiltered()
421 repo = pushop.repo.unfiltered()
422 remote = pushop.remote
422 remote = pushop.remote
423 ui.debug("checking for updated bookmarks\n")
423 ui.debug("checking for updated bookmarks\n")
424 revnums = map(repo.changelog.rev, pushop.revs or [])
424 revnums = map(repo.changelog.rev, pushop.revs or [])
425 ancestors = [a for a in repo.changelog.ancestors(revnums, inclusive=True)]
425 ancestors = [a for a in repo.changelog.ancestors(revnums, inclusive=True)]
426 (addsrc, adddst, advsrc, advdst, diverge, differ, invalid
426 (addsrc, adddst, advsrc, advdst, diverge, differ, invalid
427 ) = bookmarks.compare(repo, repo._bookmarks, remote.listkeys('bookmarks'),
427 ) = bookmarks.compare(repo, repo._bookmarks, remote.listkeys('bookmarks'),
428 srchex=hex)
428 srchex=hex)
429
429
430 for b, scid, dcid in advsrc:
430 for b, scid, dcid in advsrc:
431 if ancestors and repo[scid].rev() not in ancestors:
431 if ancestors and repo[scid].rev() not in ancestors:
432 continue
432 continue
433 if remote.pushkey('bookmarks', b, dcid, scid):
433 if remote.pushkey('bookmarks', b, dcid, scid):
434 ui.status(_("updating bookmark %s\n") % b)
434 ui.status(_("updating bookmark %s\n") % b)
435 else:
435 else:
436 ui.warn(_('updating bookmark %s failed!\n') % b)
436 ui.warn(_('updating bookmark %s failed!\n') % b)
437
437
438 class pulloperation(object):
438 class pulloperation(object):
439 """A object that represent a single pull operation
439 """A object that represent a single pull operation
440
440
441 It purpose is to carry push related state and very common operation.
441 It purpose is to carry push related state and very common operation.
442
442
443 A new should be created at the beginning of each pull and discarded
443 A new should be created at the beginning of each pull and discarded
444 afterward.
444 afterward.
445 """
445 """
446
446
447 def __init__(self, repo, remote, heads=None, force=False):
447 def __init__(self, repo, remote, heads=None, force=False):
448 # repo we pull into
448 # repo we pull into
449 self.repo = repo
449 self.repo = repo
450 # repo we pull from
450 # repo we pull from
451 self.remote = remote
451 self.remote = remote
452 # revision we try to pull (None is "all")
452 # revision we try to pull (None is "all")
453 self.heads = heads
453 self.heads = heads
454 # do we force pull?
454 # do we force pull?
455 self.force = force
455 self.force = force
456 # the name of the pull transaction
456 # the name of the pull transaction
457 self._trname = 'pull\n' + util.hidepassword(remote.url())
457 self._trname = 'pull\n' + util.hidepassword(remote.url())
458 # hold the transaction once created
458 # hold the transaction once created
459 self._tr = None
459 self._tr = None
460 # set of changesets common to local and remote before the pull
460 # set of changesets common to local and remote before the pull
461 self.common = None
461 self.common = None
462 # set of pulled heads
462 # set of pulled heads
463 self.rheads = None
463 self.rheads = None
464 # list of missing changesets to fetch remotely
464 # list of missing changesets to fetch remotely
465 self.fetch = None
465 self.fetch = None
466 # result of changegroup pulling (used as return code by pull)
466 # result of changegroup pulling (used as return code by pull)
467 self.cgresult = None
467 self.cgresult = None
468 # set of steps remaining to do (related to future bundle2 usage)
468 # set of steps remaining to do (related to future bundle2 usage)
469 self.todosteps = set(['changegroup', 'phases', 'obsmarkers'])
469 self.todosteps = set(['changegroup', 'phases', 'obsmarkers'])
470
470
471 @util.propertycache
471 @util.propertycache
472 def pulledsubset(self):
472 def pulledsubset(self):
473 """heads of the set of changeset target by the pull"""
473 """heads of the set of changeset target by the pull"""
474 # compute target subset
474 # compute target subset
475 if self.heads is None:
475 if self.heads is None:
476 # We pulled everything possible
476 # We pulled everything possible
477 # sync on everything common
477 # sync on everything common
478 c = set(self.common)
478 c = set(self.common)
479 ret = list(self.common)
479 ret = list(self.common)
480 for n in self.rheads:
480 for n in self.rheads:
481 if n not in c:
481 if n not in c:
482 ret.append(n)
482 ret.append(n)
483 return ret
483 return ret
484 else:
484 else:
485 # We pulled a specific subset
485 # We pulled a specific subset
486 # sync on this subset
486 # sync on this subset
487 return self.heads
487 return self.heads
488
488
489 def gettransaction(self):
489 def gettransaction(self):
490 """get appropriate pull transaction, creating it if needed"""
490 """get appropriate pull transaction, creating it if needed"""
491 if self._tr is None:
491 if self._tr is None:
492 self._tr = self.repo.transaction(self._trname)
492 self._tr = self.repo.transaction(self._trname)
493 return self._tr
493 return self._tr
494
494
495 def closetransaction(self):
495 def closetransaction(self):
496 """close transaction if created"""
496 """close transaction if created"""
497 if self._tr is not None:
497 if self._tr is not None:
498 self._tr.close()
498 self._tr.close()
499
499
500 def releasetransaction(self):
500 def releasetransaction(self):
501 """release transaction if created"""
501 """release transaction if created"""
502 if self._tr is not None:
502 if self._tr is not None:
503 self._tr.release()
503 self._tr.release()
504
504
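# A minimal sketch of the pulloperation lifecycle, mirroring pull() below;
# 'repo' and 'remote' are hypothetical stand-ins.
def _sketchpull(repo, remote):
    pullop = pulloperation(repo, remote)
    try:
        pullop.gettransaction()      # created lazily, then reused
        # ... apply changegroup, phase and obsmarker data here ...
        pullop.closetransaction()
    finally:
        pullop.releasetransaction()  # safe even after close, as in pull()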
505 def pull(repo, remote, heads=None, force=False):
505 def pull(repo, remote, heads=None, force=False):
506 pullop = pulloperation(repo, remote, heads, force)
506 pullop = pulloperation(repo, remote, heads, force)
507 if pullop.remote.local():
507 if pullop.remote.local():
508 missing = set(pullop.remote.requirements) - pullop.repo.supported
508 missing = set(pullop.remote.requirements) - pullop.repo.supported
509 if missing:
509 if missing:
510 msg = _("required features are not"
510 msg = _("required features are not"
511 " supported in the destination:"
511 " supported in the destination:"
512 " %s") % (', '.join(sorted(missing)))
512 " %s") % (', '.join(sorted(missing)))
513 raise util.Abort(msg)
513 raise util.Abort(msg)
514
514
515 lock = pullop.repo.lock()
515 lock = pullop.repo.lock()
516 try:
516 try:
517 _pulldiscovery(pullop)
517 _pulldiscovery(pullop)
518 if pullop.remote.capable('bundle2'):
518 if pullop.remote.capable('bundle2'):
519 _pullbundle2(pullop)
519 _pullbundle2(pullop)
520 if 'changegroup' in pullop.todosteps:
520 if 'changegroup' in pullop.todosteps:
521 _pullchangeset(pullop)
521 _pullchangeset(pullop)
522 if 'phases' in pullop.todosteps:
522 if 'phases' in pullop.todosteps:
523 _pullphase(pullop)
523 _pullphase(pullop)
524 if 'obsmarkers' in pullop.todosteps:
524 if 'obsmarkers' in pullop.todosteps:
525 _pullobsolete(pullop)
525 _pullobsolete(pullop)
526 pullop.closetransaction()
526 pullop.closetransaction()
527 finally:
527 finally:
528 pullop.releasetransaction()
528 pullop.releasetransaction()
529 lock.release()
529 lock.release()
530
530
531 return pullop.cgresult
531 return pullop.cgresult
532
532
533 def _pulldiscovery(pullop):
533 def _pulldiscovery(pullop):
534 """discovery phase for the pull
534 """discovery phase for the pull
535
535
536 Currently handles changeset discovery only; will change to handle all
536 Currently handles changeset discovery only; will change to handle all
537 discovery at some point."""
537 discovery at some point."""
538 tmp = discovery.findcommonincoming(pullop.repo.unfiltered(),
538 tmp = discovery.findcommonincoming(pullop.repo.unfiltered(),
539 pullop.remote,
539 pullop.remote,
540 heads=pullop.heads,
540 heads=pullop.heads,
541 force=pullop.force)
541 force=pullop.force)
542 pullop.common, pullop.fetch, pullop.rheads = tmp
542 pullop.common, pullop.fetch, pullop.rheads = tmp
543
543
544 def _pullbundle2(pullop):
544 def _pullbundle2(pullop):
545 """pull data using bundle2
545 """pull data using bundle2
546
546
547 For now, the only supported data type is the changegroup."""
547 For now, the only supported data type is the changegroup."""
548 kwargs = {'bundlecaps': set(['HG20'])}
548 kwargs = {'bundlecaps': set(['HG2X'])}
549 capsblob = bundle2.encodecaps(pullop.repo.bundle2caps)
549 capsblob = bundle2.encodecaps(pullop.repo.bundle2caps)
550 kwargs['bundlecaps'].add('bundle2=' + urllib.quote(capsblob))
550 kwargs['bundlecaps'].add('bundle2=' + urllib.quote(capsblob))
551 # pulling changegroup
551 # pulling changegroup
552 pullop.todosteps.remove('changegroup')
552 pullop.todosteps.remove('changegroup')
553 if not pullop.fetch:
553 if not pullop.fetch:
554 pullop.repo.ui.status(_("no changes found\n"))
554 pullop.repo.ui.status(_("no changes found\n"))
555 pullop.cgresult = 0
555 pullop.cgresult = 0
556 else:
556 else:
557 kwargs['common'] = pullop.common
557 kwargs['common'] = pullop.common
558 kwargs['heads'] = pullop.heads or pullop.rheads
558 kwargs['heads'] = pullop.heads or pullop.rheads
559 if pullop.heads is None and list(pullop.common) == [nullid]:
559 if pullop.heads is None and list(pullop.common) == [nullid]:
560 pullop.repo.ui.status(_("requesting all changes\n"))
560 pullop.repo.ui.status(_("requesting all changes\n"))
561 if kwargs.keys() == ['format']:
561 if kwargs.keys() == ['format']:
562 return # nothing to pull
562 return # nothing to pull
563 bundle = pullop.remote.getbundle('pull', **kwargs)
563 bundle = pullop.remote.getbundle('pull', **kwargs)
564 try:
564 try:
565 op = bundle2.processbundle(pullop.repo, bundle, pullop.gettransaction)
565 op = bundle2.processbundle(pullop.repo, bundle, pullop.gettransaction)
566 except KeyError, exc:
566 except KeyError, exc:
567 raise util.Abort('missing support for %s' % exc)
567 raise util.Abort('missing support for %s' % exc)
568 assert len(op.records['changegroup']) == 1
568 assert len(op.records['changegroup']) == 1
569 pullop.cgresult = op.records['changegroup'][0]['return']
569 pullop.cgresult = op.records['changegroup'][0]['return']
570
570
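# A minimal sketch of the capability round-trip built above: the client
# quotes its bundle2 capabilities into a 'bundlecaps' entry, and the
# server-side getbundle() below unquotes and decodes the same blob.
def _sketchcapsroundtrip():
    capsblob = bundle2.encodecaps({'HG2X': ()})    # client side
    wire = 'bundle2=' + urllib.quote(capsblob)
    blob = urllib.unquote(wire[len('bundle2='):])  # server side
    return bundle2.decodecaps(blob)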
571 def _pullchangeset(pullop):
571 def _pullchangeset(pullop):
572 """pull changeset from unbundle into the local repo"""
572 """pull changeset from unbundle into the local repo"""
573 # We delay opening the transaction as long as possible so we
573 # We delay opening the transaction as long as possible so we
574 # don't open it for nothing; opening it too eagerly would break
574 # don't open it for nothing; opening it too eagerly would break
575 # future useful rollback calls
575 # future useful rollback calls
576 pullop.todosteps.remove('changegroup')
576 pullop.todosteps.remove('changegroup')
577 if not pullop.fetch:
577 if not pullop.fetch:
578 pullop.repo.ui.status(_("no changes found\n"))
578 pullop.repo.ui.status(_("no changes found\n"))
579 pullop.cgresult = 0
579 pullop.cgresult = 0
580 return
580 return
581 pullop.gettransaction()
581 pullop.gettransaction()
582 if pullop.heads is None and list(pullop.common) == [nullid]:
582 if pullop.heads is None and list(pullop.common) == [nullid]:
583 pullop.repo.ui.status(_("requesting all changes\n"))
583 pullop.repo.ui.status(_("requesting all changes\n"))
584 elif pullop.heads is None and pullop.remote.capable('changegroupsubset'):
584 elif pullop.heads is None and pullop.remote.capable('changegroupsubset'):
585 # issue1320, avoid a race if remote changed after discovery
585 # issue1320, avoid a race if remote changed after discovery
586 pullop.heads = pullop.rheads
586 pullop.heads = pullop.rheads
587
587
588 if pullop.remote.capable('getbundle'):
588 if pullop.remote.capable('getbundle'):
589 # TODO: get bundlecaps from remote
589 # TODO: get bundlecaps from remote
590 cg = pullop.remote.getbundle('pull', common=pullop.common,
590 cg = pullop.remote.getbundle('pull', common=pullop.common,
591 heads=pullop.heads or pullop.rheads)
591 heads=pullop.heads or pullop.rheads)
592 elif pullop.heads is None:
592 elif pullop.heads is None:
593 cg = pullop.remote.changegroup(pullop.fetch, 'pull')
593 cg = pullop.remote.changegroup(pullop.fetch, 'pull')
594 elif not pullop.remote.capable('changegroupsubset'):
594 elif not pullop.remote.capable('changegroupsubset'):
595 raise util.Abort(_("partial pull cannot be done because "
595 raise util.Abort(_("partial pull cannot be done because "
596 "other repository doesn't support "
596 "other repository doesn't support "
597 "changegroupsubset."))
597 "changegroupsubset."))
598 else:
598 else:
599 cg = pullop.remote.changegroupsubset(pullop.fetch, pullop.heads, 'pull')
599 cg = pullop.remote.changegroupsubset(pullop.fetch, pullop.heads, 'pull')
600 pullop.cgresult = changegroup.addchangegroup(pullop.repo, cg, 'pull',
600 pullop.cgresult = changegroup.addchangegroup(pullop.repo, cg, 'pull',
601 pullop.remote.url())
601 pullop.remote.url())
602
602
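# A minimal sketch of the capability fallback ladder implemented above;
# 'remote' and 'heads' are hypothetical stand-ins for the pulloperation
# attributes.
def _sketchpullmethod(remote, heads):
    if remote.capable('getbundle'):
        return 'getbundle'           # preferred: common/heads based
    elif heads is None:
        return 'changegroup'         # full pull from very old servers
    elif remote.capable('changegroupsubset'):
        return 'changegroupsubset'   # partial pull from old servers
    else:
        return None                  # partial pull impossible, abort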
603 def _pullphase(pullop):
603 def _pullphase(pullop):
604 # Get remote phases data from remote
604 # Get remote phases data from remote
605 pullop.todosteps.remove('phases')
605 pullop.todosteps.remove('phases')
606 remotephases = pullop.remote.listkeys('phases')
606 remotephases = pullop.remote.listkeys('phases')
607 publishing = bool(remotephases.get('publishing', False))
607 publishing = bool(remotephases.get('publishing', False))
608 if remotephases and not publishing:
608 if remotephases and not publishing:
609 # remote is new and non-publishing
609 # remote is new and non-publishing
610 pheads, _dr = phases.analyzeremotephases(pullop.repo,
610 pheads, _dr = phases.analyzeremotephases(pullop.repo,
611 pullop.pulledsubset,
611 pullop.pulledsubset,
612 remotephases)
612 remotephases)
613 phases.advanceboundary(pullop.repo, phases.public, pheads)
613 phases.advanceboundary(pullop.repo, phases.public, pheads)
614 phases.advanceboundary(pullop.repo, phases.draft,
614 phases.advanceboundary(pullop.repo, phases.draft,
615 pullop.pulledsubset)
615 pullop.pulledsubset)
616 else:
616 else:
617 # Remote is old or publishing all common changesets
617 # Remote is old or publishing all common changesets
618 # should be seen as public
618 # should be seen as public
619 phases.advanceboundary(pullop.repo, phases.public,
619 phases.advanceboundary(pullop.repo, phases.public,
620 pullop.pulledsubset)
620 pullop.pulledsubset)
621
621
622 def _pullobsolete(pullop):
622 def _pullobsolete(pullop):
623 """utility function to pull obsolete markers from a remote
623 """utility function to pull obsolete markers from a remote
624
624
625 The `gettransaction` function returns the pull transaction, creating
625 The `gettransaction` function returns the pull transaction, creating
626 one if necessary. We return the transaction to inform the calling code that
626 one if necessary. We return the transaction to inform the calling code that
627 a new transaction has been created (when applicable).
627 a new transaction has been created (when applicable).
628
628
629 This exists mostly to allow overriding for experimentation purposes."""
629 This exists mostly to allow overriding for experimentation purposes."""
630 pullop.todosteps.remove('obsmarkers')
630 pullop.todosteps.remove('obsmarkers')
631 tr = None
631 tr = None
632 if obsolete._enabled:
632 if obsolete._enabled:
633 pullop.repo.ui.debug('fetching remote obsolete markers\n')
633 pullop.repo.ui.debug('fetching remote obsolete markers\n')
634 remoteobs = pullop.remote.listkeys('obsolete')
634 remoteobs = pullop.remote.listkeys('obsolete')
635 if 'dump0' in remoteobs:
635 if 'dump0' in remoteobs:
636 tr = pullop.gettransaction()
636 tr = pullop.gettransaction()
637 for key in sorted(remoteobs, reverse=True):
637 for key in sorted(remoteobs, reverse=True):
638 if key.startswith('dump'):
638 if key.startswith('dump'):
639 data = base85.b85decode(remoteobs[key])
639 data = base85.b85decode(remoteobs[key])
640 pullop.repo.obsstore.mergemarkers(tr, data)
640 pullop.repo.obsstore.mergemarkers(tr, data)
641 pullop.repo.invalidatevolatilesets()
641 pullop.repo.invalidatevolatilesets()
642 return tr
642 return tr
643
643
644 def getbundle(repo, source, heads=None, common=None, bundlecaps=None):
644 def getbundle(repo, source, heads=None, common=None, bundlecaps=None):
645 """return a full bundle (with potentially multiple kind of parts)
645 """return a full bundle (with potentially multiple kind of parts)
646
646
647 Could be a bundle HG10 or a bundle HG20 depending on bundlecaps
647 Could be a bundle HG10 or a bundle HG2X depending on bundlecaps
648 passed. For now, the bundle can contain only a changegroup, but this will
648 passed. For now, the bundle can contain only a changegroup, but this will
649 change when more part types become available for bundle2.
649 change when more part types become available for bundle2.
650
650
651 This is different from changegroup.getbundle, which only returns an HG10
651 This is different from changegroup.getbundle, which only returns an HG10
652 changegroup bundle. They may eventually get reunited in the future when we
652 changegroup bundle. They may eventually get reunited in the future when we
653 have a clearer idea of the API we want for querying different data.
653 have a clearer idea of the API we want for querying different data.
654
654
655 The implementation is at a very early stage and will get a massive rework
655 The implementation is at a very early stage and will get a massive rework
656 when the bundle API is refined.
656 when the bundle API is refined.
657 """
657 """
658 # build bundle here.
658 # build bundle here.
659 cg = changegroup.getbundle(repo, source, heads=heads,
659 cg = changegroup.getbundle(repo, source, heads=heads,
660 common=common, bundlecaps=bundlecaps)
660 common=common, bundlecaps=bundlecaps)
661 if bundlecaps is None or 'HG20' not in bundlecaps:
661 if bundlecaps is None or 'HG2X' not in bundlecaps:
662 return cg
662 return cg
663 # very crude first implementation,
663 # very crude first implementation,
664 # the bundle API will change and the generation will be done lazily.
664 # the bundle API will change and the generation will be done lazily.
665 b2caps = {}
665 b2caps = {}
666 for bcaps in bundlecaps:
666 for bcaps in bundlecaps:
667 if bcaps.startswith('bundle2='):
667 if bcaps.startswith('bundle2='):
668 blob = urllib.unquote(bcaps[len('bundle2='):])
668 blob = urllib.unquote(bcaps[len('bundle2='):])
669 b2caps.update(bundle2.decodecaps(blob))
669 b2caps.update(bundle2.decodecaps(blob))
670 bundler = bundle2.bundle20(repo.ui, b2caps)
670 bundler = bundle2.bundle20(repo.ui, b2caps)
671 part = bundle2.bundlepart('changegroup', data=cg.getchunks())
671 part = bundle2.bundlepart('changegroup', data=cg.getchunks())
672 bundler.addpart(part)
672 bundler.addpart(part)
673 return util.chunkbuffer(bundler.getchunks())
673 return util.chunkbuffer(bundler.getchunks())
674
674
675 class PushRaced(RuntimeError):
675 class PushRaced(RuntimeError):
676 """An exception raised during unbundling that indicate a push race"""
676 """An exception raised during unbundling that indicate a push race"""
677
677
678 def check_heads(repo, their_heads, context):
678 def check_heads(repo, their_heads, context):
679 """check if the heads of a repo have been modified
679 """check if the heads of a repo have been modified
680
680
681 Used by peer for unbundling.
681 Used by peer for unbundling.
682 """
682 """
683 heads = repo.heads()
683 heads = repo.heads()
684 heads_hash = util.sha1(''.join(sorted(heads))).digest()
684 heads_hash = util.sha1(''.join(sorted(heads))).digest()
685 if not (their_heads == ['force'] or their_heads == heads or
685 if not (their_heads == ['force'] or their_heads == heads or
686 their_heads == ['hashed', heads_hash]):
686 their_heads == ['hashed', heads_hash]):
687 # someone else committed/pushed/unbundled while we
687 # someone else committed/pushed/unbundled while we
688 # were transferring data
688 # were transferring data
689 raise PushRaced('repository changed while %s - '
689 raise PushRaced('repository changed while %s - '
690 'please try again' % context)
690 'please try again' % context)
691
691
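# A minimal sketch of how the 'hashed' form checked above can be produced
# on the sending side, using the same digest over the sorted binary heads.
def _sketchhashedheads(heads):
    return ['hashed', util.sha1(''.join(sorted(heads))).digest()]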
692 def unbundle(repo, cg, heads, source, url):
692 def unbundle(repo, cg, heads, source, url):
693 """Apply a bundle to a repo.
693 """Apply a bundle to a repo.
694
694
695 This function makes sure the repo is locked during the application and has
695 This function makes sure the repo is locked during the application and has
696 a mechanism to check that no push race occurred between the creation of the
696 a mechanism to check that no push race occurred between the creation of the
697 bundle and its application.
697 bundle and its application.
698
698
699 If the push was raced, a PushRaced exception is raised."""
699 If the push was raced, a PushRaced exception is raised."""
700 r = 0
700 r = 0
701 # need a transaction when processing a bundle2 stream
701 # need a transaction when processing a bundle2 stream
702 tr = None
702 tr = None
703 lock = repo.lock()
703 lock = repo.lock()
704 try:
704 try:
705 check_heads(repo, heads, 'uploading changes')
705 check_heads(repo, heads, 'uploading changes')
706 # push can proceed
706 # push can proceed
707 if util.safehasattr(cg, 'params'):
707 if util.safehasattr(cg, 'params'):
708 tr = repo.transaction('unbundle')
708 tr = repo.transaction('unbundle')
709 r = bundle2.processbundle(repo, cg, lambda: tr).reply
709 r = bundle2.processbundle(repo, cg, lambda: tr).reply
710 tr.close()
710 tr.close()
711 else:
711 else:
712 r = changegroup.addchangegroup(repo, cg, source, url)
712 r = changegroup.addchangegroup(repo, cg, source, url)
713 finally:
713 finally:
714 if tr is not None:
714 if tr is not None:
715 tr.release()
715 tr.release()
716 lock.release()
716 lock.release()
717 return r
717 return r
@@ -1,1910 +1,1910 b''
1 # localrepo.py - read/write repository class for mercurial
1 # localrepo.py - read/write repository class for mercurial
2 #
2 #
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7 from node import hex, nullid, short
7 from node import hex, nullid, short
8 from i18n import _
8 from i18n import _
9 import urllib
9 import urllib
10 import peer, changegroup, subrepo, pushkey, obsolete, repoview
10 import peer, changegroup, subrepo, pushkey, obsolete, repoview
11 import changelog, dirstate, filelog, manifest, context, bookmarks, phases
11 import changelog, dirstate, filelog, manifest, context, bookmarks, phases
12 import lock as lockmod
12 import lock as lockmod
13 import transaction, store, encoding, exchange, bundle2
13 import transaction, store, encoding, exchange, bundle2
14 import scmutil, util, extensions, hook, error, revset
14 import scmutil, util, extensions, hook, error, revset
15 import match as matchmod
15 import match as matchmod
16 import merge as mergemod
16 import merge as mergemod
17 import tags as tagsmod
17 import tags as tagsmod
18 from lock import release
18 from lock import release
19 import weakref, errno, os, time, inspect
19 import weakref, errno, os, time, inspect
20 import branchmap, pathutil
20 import branchmap, pathutil
21 propertycache = util.propertycache
21 propertycache = util.propertycache
22 filecache = scmutil.filecache
22 filecache = scmutil.filecache
23
23
24 class repofilecache(filecache):
24 class repofilecache(filecache):
25 """All filecache usage on repo are done for logic that should be unfiltered
25 """All filecache usage on repo are done for logic that should be unfiltered
26 """
26 """
27
27
28 def __get__(self, repo, type=None):
28 def __get__(self, repo, type=None):
29 return super(repofilecache, self).__get__(repo.unfiltered(), type)
29 return super(repofilecache, self).__get__(repo.unfiltered(), type)
30 def __set__(self, repo, value):
30 def __set__(self, repo, value):
31 return super(repofilecache, self).__set__(repo.unfiltered(), value)
31 return super(repofilecache, self).__set__(repo.unfiltered(), value)
32 def __delete__(self, repo):
32 def __delete__(self, repo):
33 return super(repofilecache, self).__delete__(repo.unfiltered())
33 return super(repofilecache, self).__delete__(repo.unfiltered())
34
34
35 class storecache(repofilecache):
35 class storecache(repofilecache):
36 """filecache for files in the store"""
36 """filecache for files in the store"""
37 def join(self, obj, fname):
37 def join(self, obj, fname):
38 return obj.sjoin(fname)
38 return obj.sjoin(fname)
39
39
40 class unfilteredpropertycache(propertycache):
40 class unfilteredpropertycache(propertycache):
41 """propertycache that apply to unfiltered repo only"""
41 """propertycache that apply to unfiltered repo only"""
42
42
43 def __get__(self, repo, type=None):
43 def __get__(self, repo, type=None):
44 unfi = repo.unfiltered()
44 unfi = repo.unfiltered()
45 if unfi is repo:
45 if unfi is repo:
46 return super(unfilteredpropertycache, self).__get__(unfi)
46 return super(unfilteredpropertycache, self).__get__(unfi)
47 return getattr(unfi, self.name)
47 return getattr(unfi, self.name)
48
48
49 class filteredpropertycache(propertycache):
49 class filteredpropertycache(propertycache):
50 """propertycache that must take filtering in account"""
50 """propertycache that must take filtering in account"""
51
51
52 def cachevalue(self, obj, value):
52 def cachevalue(self, obj, value):
53 object.__setattr__(obj, self.name, value)
53 object.__setattr__(obj, self.name, value)
54
54
55
55
56 def hasunfilteredcache(repo, name):
56 def hasunfilteredcache(repo, name):
57 """check if a repo has an unfilteredpropertycache value for <name>"""
57 """check if a repo has an unfilteredpropertycache value for <name>"""
58 return name in vars(repo.unfiltered())
58 return name in vars(repo.unfiltered())
59
59
60 def unfilteredmethod(orig):
60 def unfilteredmethod(orig):
61 """decorate method that always need to be run on unfiltered version"""
61 """decorate method that always need to be run on unfiltered version"""
62 def wrapper(repo, *args, **kwargs):
62 def wrapper(repo, *args, **kwargs):
63 return orig(repo.unfiltered(), *args, **kwargs)
63 return orig(repo.unfiltered(), *args, **kwargs)
64 return wrapper
64 return wrapper
65
65
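# A minimal sketch of the decorator above applied to a hypothetical class;
# the wrapped body always sees the unfiltered object, even when invoked on
# a filtered repoview.
class _sketchrepo(object):
    def unfiltered(self):
        return self
    @unfilteredmethod
    def hello(self):
        return 'always called on the unfiltered repo: %r' % self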
66 moderncaps = set(('lookup', 'branchmap', 'pushkey', 'known', 'getbundle',
66 moderncaps = set(('lookup', 'branchmap', 'pushkey', 'known', 'getbundle',
67 'unbundle'))
67 'unbundle'))
68 legacycaps = moderncaps.union(set(['changegroupsubset']))
68 legacycaps = moderncaps.union(set(['changegroupsubset']))
69
69
70 class localpeer(peer.peerrepository):
70 class localpeer(peer.peerrepository):
71 '''peer for a local repo; reflects only the most recent API'''
71 '''peer for a local repo; reflects only the most recent API'''
72
72
73 def __init__(self, repo, caps=moderncaps):
73 def __init__(self, repo, caps=moderncaps):
74 peer.peerrepository.__init__(self)
74 peer.peerrepository.__init__(self)
75 self._repo = repo.filtered('served')
75 self._repo = repo.filtered('served')
76 self.ui = repo.ui
76 self.ui = repo.ui
77 self._caps = repo._restrictcapabilities(caps)
77 self._caps = repo._restrictcapabilities(caps)
78 self.requirements = repo.requirements
78 self.requirements = repo.requirements
79 self.supportedformats = repo.supportedformats
79 self.supportedformats = repo.supportedformats
80
80
81 def close(self):
81 def close(self):
82 self._repo.close()
82 self._repo.close()
83
83
84 def _capabilities(self):
84 def _capabilities(self):
85 return self._caps
85 return self._caps
86
86
87 def local(self):
87 def local(self):
88 return self._repo
88 return self._repo
89
89
90 def canpush(self):
90 def canpush(self):
91 return True
91 return True
92
92
93 def url(self):
93 def url(self):
94 return self._repo.url()
94 return self._repo.url()
95
95
96 def lookup(self, key):
96 def lookup(self, key):
97 return self._repo.lookup(key)
97 return self._repo.lookup(key)
98
98
99 def branchmap(self):
99 def branchmap(self):
100 return self._repo.branchmap()
100 return self._repo.branchmap()
101
101
102 def heads(self):
102 def heads(self):
103 return self._repo.heads()
103 return self._repo.heads()
104
104
105 def known(self, nodes):
105 def known(self, nodes):
106 return self._repo.known(nodes)
106 return self._repo.known(nodes)
107
107
108 def getbundle(self, source, heads=None, common=None, bundlecaps=None,
108 def getbundle(self, source, heads=None, common=None, bundlecaps=None,
109 format='HG10'):
109 format='HG10'):
110 cg = exchange.getbundle(self._repo, source, heads=heads,
110 cg = exchange.getbundle(self._repo, source, heads=heads,
111 common=common, bundlecaps=bundlecaps)
111 common=common, bundlecaps=bundlecaps)
112 if bundlecaps is not None and 'HG20' in bundlecaps:
112 if bundlecaps is not None and 'HG2X' in bundlecaps:
113 # When requesting a bundle2, getbundle returns a stream to make the
113 # When requesting a bundle2, getbundle returns a stream to make the
114 # wire level function happier. We need to build a proper object
114 # wire-level function happier. We need to build a proper object
114 # wire-level function happier. We need to build a proper object
115 # from it in the local peer.
115 # from it in the local peer.
116 cg = bundle2.unbundle20(self.ui, cg)
117 return cg
117 return cg
118
118
119 # TODO We might want to move the next two calls into legacypeer and add
119 # TODO We might want to move the next two calls into legacypeer and add
120 # unbundle instead.
120 # unbundle instead.
121
121
122 def unbundle(self, cg, heads, url):
122 def unbundle(self, cg, heads, url):
123 """apply a bundle on a repo
123 """apply a bundle on a repo
124
124
125 This function handles the repo locking itself."""
125 This function handles the repo locking itself."""
126 try:
126 try:
127 cg = exchange.readbundle(self.ui, cg, None)
127 cg = exchange.readbundle(self.ui, cg, None)
128 ret = exchange.unbundle(self._repo, cg, heads, 'push', url)
128 ret = exchange.unbundle(self._repo, cg, heads, 'push', url)
129 if util.safehasattr(ret, 'getchunks'):
129 if util.safehasattr(ret, 'getchunks'):
130 # This is a bundle20 object, turn it into an unbundler.
130 # This is a bundle20 object, turn it into an unbundler.
131 # This little dance should be dropped eventually when the API
131 # This little dance should be dropped eventually when the API
132 # is finally improved.
132 # is finally improved.
133 stream = util.chunkbuffer(ret.getchunks())
133 stream = util.chunkbuffer(ret.getchunks())
134 ret = bundle2.unbundle20(self.ui, stream)
134 ret = bundle2.unbundle20(self.ui, stream)
135 return ret
135 return ret
136 except exchange.PushRaced, exc:
136 except exchange.PushRaced, exc:
137 raise error.ResponseError(_('push failed:'), exc.message)
137 raise error.ResponseError(_('push failed:'), exc.message)
138
138
139 def lock(self):
139 def lock(self):
140 return self._repo.lock()
140 return self._repo.lock()
141
141
142 def addchangegroup(self, cg, source, url):
142 def addchangegroup(self, cg, source, url):
143 return changegroup.addchangegroup(self._repo, cg, source, url)
143 return changegroup.addchangegroup(self._repo, cg, source, url)
144
144
145 def pushkey(self, namespace, key, old, new):
145 def pushkey(self, namespace, key, old, new):
146 return self._repo.pushkey(namespace, key, old, new)
146 return self._repo.pushkey(namespace, key, old, new)
147
147
148 def listkeys(self, namespace):
148 def listkeys(self, namespace):
149 return self._repo.listkeys(namespace)
149 return self._repo.listkeys(namespace)
150
150
151 def debugwireargs(self, one, two, three=None, four=None, five=None):
151 def debugwireargs(self, one, two, three=None, four=None, five=None):
152 '''used to test argument passing over the wire'''
152 '''used to test argument passing over the wire'''
153 return "%s %s %s %s %s" % (one, two, three, four, five)
153 return "%s %s %s %s %s" % (one, two, three, four, five)
154
154
155 class locallegacypeer(localpeer):
155 class locallegacypeer(localpeer):
156 '''peer extension which implements legacy methods too; used for tests with
156 '''peer extension which implements legacy methods too; used for tests with
157 restricted capabilities'''
157 restricted capabilities'''
158
158
159 def __init__(self, repo):
159 def __init__(self, repo):
160 localpeer.__init__(self, repo, caps=legacycaps)
160 localpeer.__init__(self, repo, caps=legacycaps)
161
161
162 def branches(self, nodes):
162 def branches(self, nodes):
163 return self._repo.branches(nodes)
163 return self._repo.branches(nodes)
164
164
165 def between(self, pairs):
165 def between(self, pairs):
166 return self._repo.between(pairs)
166 return self._repo.between(pairs)
167
167
168 def changegroup(self, basenodes, source):
168 def changegroup(self, basenodes, source):
169 return changegroup.changegroup(self._repo, basenodes, source)
169 return changegroup.changegroup(self._repo, basenodes, source)
170
170
171 def changegroupsubset(self, bases, heads, source):
171 def changegroupsubset(self, bases, heads, source):
172 return changegroup.changegroupsubset(self._repo, bases, heads, source)
172 return changegroup.changegroupsubset(self._repo, bases, heads, source)
173
173
174 class localrepository(object):
174 class localrepository(object):
175
175
176 supportedformats = set(('revlogv1', 'generaldelta'))
176 supportedformats = set(('revlogv1', 'generaldelta'))
177 _basesupported = supportedformats | set(('store', 'fncache', 'shared',
177 _basesupported = supportedformats | set(('store', 'fncache', 'shared',
178 'dotencode'))
178 'dotencode'))
179 openerreqs = set(('revlogv1', 'generaldelta'))
179 openerreqs = set(('revlogv1', 'generaldelta'))
180 requirements = ['revlogv1']
180 requirements = ['revlogv1']
181 filtername = None
181 filtername = None
182
182
183 bundle2caps = {'HG20': ()}
183 bundle2caps = {'HG2X': ()}
184
184
185 # a list of (ui, featureset) functions.
185 # a list of (ui, featureset) functions.
186 # only functions defined in module of enabled extensions are invoked
186 # only functions defined in modules of enabled extensions are invoked
186 # only functions defined in modules of enabled extensions are invoked
187 featuresetupfuncs = set()
188
188
189 def _baserequirements(self, create):
189 def _baserequirements(self, create):
190 return self.requirements[:]
190 return self.requirements[:]
191
191
192 def __init__(self, baseui, path=None, create=False):
192 def __init__(self, baseui, path=None, create=False):
193 self.wvfs = scmutil.vfs(path, expandpath=True, realpath=True)
193 self.wvfs = scmutil.vfs(path, expandpath=True, realpath=True)
194 self.wopener = self.wvfs
194 self.wopener = self.wvfs
195 self.root = self.wvfs.base
195 self.root = self.wvfs.base
196 self.path = self.wvfs.join(".hg")
196 self.path = self.wvfs.join(".hg")
197 self.origroot = path
197 self.origroot = path
198 self.auditor = pathutil.pathauditor(self.root, self._checknested)
198 self.auditor = pathutil.pathauditor(self.root, self._checknested)
199 self.vfs = scmutil.vfs(self.path)
199 self.vfs = scmutil.vfs(self.path)
200 self.opener = self.vfs
200 self.opener = self.vfs
201 self.baseui = baseui
201 self.baseui = baseui
202 self.ui = baseui.copy()
202 self.ui = baseui.copy()
203 self.ui.copy = baseui.copy # prevent copying repo configuration
203 self.ui.copy = baseui.copy # prevent copying repo configuration
204 # A list of callback to shape the phase if no data were found.
204 # A list of callbacks to shape the phases if no data were found.
204 # A list of callbacks to shape the phases if no data were found.
205 # Callbacks are in the form: func(repo, roots) --> processed roots.
205 # Callbacks are in the form: func(repo, roots) --> processed roots.
206 # This list is to be filled by extensions during repo setup
206 # This list is to be filled by extensions during repo setup
207 self._phasedefaults = []
208 try:
208 try:
209 self.ui.readconfig(self.join("hgrc"), self.root)
209 self.ui.readconfig(self.join("hgrc"), self.root)
210 extensions.loadall(self.ui)
210 extensions.loadall(self.ui)
211 except IOError:
211 except IOError:
212 pass
212 pass
213
213
214 if self.featuresetupfuncs:
214 if self.featuresetupfuncs:
215 self.supported = set(self._basesupported) # use private copy
215 self.supported = set(self._basesupported) # use private copy
216 extmods = set(m.__name__ for n, m
216 extmods = set(m.__name__ for n, m
217 in extensions.extensions(self.ui))
217 in extensions.extensions(self.ui))
218 for setupfunc in self.featuresetupfuncs:
218 for setupfunc in self.featuresetupfuncs:
219 if setupfunc.__module__ in extmods:
219 if setupfunc.__module__ in extmods:
220 setupfunc(self.ui, self.supported)
220 setupfunc(self.ui, self.supported)
221 else:
221 else:
222 self.supported = self._basesupported
222 self.supported = self._basesupported
223
223
224 if not self.vfs.isdir():
224 if not self.vfs.isdir():
225 if create:
225 if create:
226 if not self.wvfs.exists():
226 if not self.wvfs.exists():
227 self.wvfs.makedirs()
227 self.wvfs.makedirs()
228 self.vfs.makedir(notindexed=True)
228 self.vfs.makedir(notindexed=True)
229 requirements = self._baserequirements(create)
229 requirements = self._baserequirements(create)
230 if self.ui.configbool('format', 'usestore', True):
230 if self.ui.configbool('format', 'usestore', True):
231 self.vfs.mkdir("store")
231 self.vfs.mkdir("store")
232 requirements.append("store")
232 requirements.append("store")
233 if self.ui.configbool('format', 'usefncache', True):
233 if self.ui.configbool('format', 'usefncache', True):
234 requirements.append("fncache")
234 requirements.append("fncache")
235 if self.ui.configbool('format', 'dotencode', True):
235 if self.ui.configbool('format', 'dotencode', True):
236 requirements.append('dotencode')
236 requirements.append('dotencode')
237 # create an invalid changelog
237 # create an invalid changelog
238 self.vfs.append(
238 self.vfs.append(
239 "00changelog.i",
239 "00changelog.i",
240 '\0\0\0\2' # represents revlogv2
240 '\0\0\0\2' # represents revlogv2
241 ' dummy changelog to prevent using the old repo layout'
241 ' dummy changelog to prevent using the old repo layout'
242 )
242 )
243 if self.ui.configbool('format', 'generaldelta', False):
243 if self.ui.configbool('format', 'generaldelta', False):
244 requirements.append("generaldelta")
244 requirements.append("generaldelta")
245 requirements = set(requirements)
245 requirements = set(requirements)
246 else:
246 else:
247 raise error.RepoError(_("repository %s not found") % path)
247 raise error.RepoError(_("repository %s not found") % path)
248 elif create:
248 elif create:
249 raise error.RepoError(_("repository %s already exists") % path)
249 raise error.RepoError(_("repository %s already exists") % path)
250 else:
250 else:
251 try:
251 try:
252 requirements = scmutil.readrequires(self.vfs, self.supported)
252 requirements = scmutil.readrequires(self.vfs, self.supported)
253 except IOError, inst:
253 except IOError, inst:
254 if inst.errno != errno.ENOENT:
254 if inst.errno != errno.ENOENT:
255 raise
255 raise
256 requirements = set()
256 requirements = set()
257
257
258 self.sharedpath = self.path
258 self.sharedpath = self.path
259 try:
259 try:
260 vfs = scmutil.vfs(self.vfs.read("sharedpath").rstrip('\n'),
260 vfs = scmutil.vfs(self.vfs.read("sharedpath").rstrip('\n'),
261 realpath=True)
261 realpath=True)
262 s = vfs.base
262 s = vfs.base
263 if not vfs.exists():
263 if not vfs.exists():
264 raise error.RepoError(
264 raise error.RepoError(
265 _('.hg/sharedpath points to nonexistent directory %s') % s)
265 _('.hg/sharedpath points to nonexistent directory %s') % s)
266 self.sharedpath = s
266 self.sharedpath = s
267 except IOError, inst:
267 except IOError, inst:
268 if inst.errno != errno.ENOENT:
268 if inst.errno != errno.ENOENT:
269 raise
269 raise
270
270
271 self.store = store.store(requirements, self.sharedpath, scmutil.vfs)
271 self.store = store.store(requirements, self.sharedpath, scmutil.vfs)
272 self.spath = self.store.path
272 self.spath = self.store.path
273 self.svfs = self.store.vfs
273 self.svfs = self.store.vfs
274 self.sopener = self.svfs
274 self.sopener = self.svfs
275 self.sjoin = self.store.join
275 self.sjoin = self.store.join
276 self.vfs.createmode = self.store.createmode
276 self.vfs.createmode = self.store.createmode
277 self._applyrequirements(requirements)
277 self._applyrequirements(requirements)
278 if create:
278 if create:
279 self._writerequirements()
279 self._writerequirements()
280
280
281
281
282 self._branchcaches = {}
282 self._branchcaches = {}
283 self.filterpats = {}
283 self.filterpats = {}
284 self._datafilters = {}
284 self._datafilters = {}
285 self._transref = self._lockref = self._wlockref = None
285 self._transref = self._lockref = self._wlockref = None
286
286
287 # A cache for various files under .hg/ that tracks file changes,
287 # A cache for various files under .hg/ that tracks file changes,
288 # (used by the filecache decorator)
288 # (used by the filecache decorator)
289 #
289 #
290 # Maps a property name to its util.filecacheentry
290 # Maps a property name to its util.filecacheentry
291 self._filecache = {}
291 self._filecache = {}
292
292
293 # hold sets of revision to be filtered
293 # hold sets of revision to be filtered
294 # should be cleared when something might have changed the filter value:
294 # should be cleared when something might have changed the filter value:
295 # - new changesets,
295 # - new changesets,
296 # - phase change,
296 # - phase change,
297 # - new obsolescence marker,
297 # - new obsolescence marker,
298 # - working directory parent change,
298 # - working directory parent change,
299 # - bookmark changes
299 # - bookmark changes
300 self.filteredrevcache = {}
300 self.filteredrevcache = {}
301
301
302 def close(self):
302 def close(self):
303 pass
303 pass
304
304
305 def _restrictcapabilities(self, caps):
305 def _restrictcapabilities(self, caps):
306 # bundle2 is not ready for prime time, drop it unless explicitly
306 # bundle2 is not ready for prime time, drop it unless explicitly
307 # required by the tests (or some brave tester)
307 # required by the tests (or some brave tester)
308 if self.ui.configbool('server', 'bundle2', False):
308 if self.ui.configbool('server', 'bundle2', False):
309 caps = set(caps)
309 caps = set(caps)
310 capsblob = bundle2.encodecaps(self.bundle2caps)
310 capsblob = bundle2.encodecaps(self.bundle2caps)
311 caps.add('bundle2=' + urllib.quote(capsblob))
311 caps.add('bundle2=' + urllib.quote(capsblob))
312 return caps
312 return caps
313
313
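    # A minimal sketch of the client side of the handshake above, assuming
    # a `remote` peer object and a decodecaps counterpart to
    # bundle2.encodecaps; names here are illustrative, not part of this
    # changeset.
    #
    #     import urllib
    #     for cap in remote.capabilities():
    #         if cap.startswith('bundle2='):
    #             blob = urllib.unquote(cap[len('bundle2='):])
    #             b2caps = bundle2.decodecaps(blob)  # e.g. {'HG2X': ()}
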
    def _applyrequirements(self, requirements):
        self.requirements = requirements
        self.sopener.options = dict((r, 1) for r in requirements
                                    if r in self.openerreqs)
        chunkcachesize = self.ui.configint('format', 'chunkcachesize')
        if chunkcachesize is not None:
            self.sopener.options['chunkcachesize'] = chunkcachesize

    def _writerequirements(self):
        reqfile = self.opener("requires", "w")
        for r in sorted(self.requirements):
            reqfile.write("%s\n" % r)
        reqfile.close()

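    # For illustration: with the 'format' defaults used in __init__ above, a
    # freshly created repository's .hg/requires reads (sorted, one per line):
    #
    #     dotencode
    #     fncache
    #     revlogv1
    #     store
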
    def _checknested(self, path):
        """Determine if path is a legal nested repository."""
        if not path.startswith(self.root):
            return False
        subpath = path[len(self.root) + 1:]
        normsubpath = util.pconvert(subpath)

        # XXX: Checking against the current working copy is wrong in
        # the sense that it can reject things like
        #
        #   $ hg cat -r 10 sub/x.txt
        #
        # if sub/ is no longer a subrepository in the working copy
        # parent revision.
        #
        # However, it can of course also allow things that would have
        # been rejected before, such as the above cat command if sub/
        # is a subrepository now, but was a normal directory before.
        # The old path auditor would have rejected by mistake since it
        # panics when it sees sub/.hg/.
        #
        # All in all, checking against the working copy seems sensible
        # since we want to prevent access to nested repositories on
        # the filesystem *now*.
        ctx = self[None]
        parts = util.splitpath(subpath)
        while parts:
            prefix = '/'.join(parts)
            if prefix in ctx.substate:
                if prefix == normsubpath:
                    return True
                else:
                    sub = ctx.sub(prefix)
                    return sub.checknested(subpath[len(prefix) + 1:])
            else:
                parts.pop()
        return False

    def peer(self):
        return localpeer(self) # not cached to avoid reference cycle

    def unfiltered(self):
        """Return unfiltered version of the repository

        Intended to be overwritten by filtered repo."""
        return self

    def filtered(self, name):
        """Return a filtered version of a repository"""
        # build a new class with the mixin and the current class
        # (possibly subclass of the repo)
        class proxycls(repoview.repoview, self.unfiltered().__class__):
            pass
        return proxycls(self, name)

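    # A minimal usage sketch, assuming `repo` is an unfiltered
    # localrepository; 'visible' is the repoview filter name used by
    # cancopy() further down:
    #
    #     visible = repo.filtered('visible')
    #     assert visible.unfiltered() is repo
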
    @repofilecache('bookmarks')
    def _bookmarks(self):
        return bookmarks.bmstore(self)

    @repofilecache('bookmarks.current')
    def _bookmarkcurrent(self):
        return bookmarks.readcurrent(self)

    def bookmarkheads(self, bookmark):
        name = bookmark.split('@', 1)[0]
        heads = []
        for mark, n in self._bookmarks.iteritems():
            if mark.split('@', 1)[0] == name:
                heads.append(n)
        return heads

    @storecache('phaseroots')
    def _phasecache(self):
        return phases.phasecache(self, self._phasedefaults)

    @storecache('obsstore')
    def obsstore(self):
        store = obsolete.obsstore(self.sopener)
        if store and not obsolete._enabled:
            # message is rare enough to not be translated
            msg = 'obsolete feature not enabled but %i markers found!\n'
            self.ui.warn(msg % len(list(store)))
        return store

    @storecache('00changelog.i')
    def changelog(self):
        c = changelog.changelog(self.sopener)
        if 'HG_PENDING' in os.environ:
            p = os.environ['HG_PENDING']
            if p.startswith(self.root):
                c.readpending('00changelog.i.a')
        return c

    @storecache('00manifest.i')
    def manifest(self):
        return manifest.manifest(self.sopener)

    @repofilecache('dirstate')
    def dirstate(self):
        warned = [0]
        def validate(node):
            try:
                self.changelog.rev(node)
                return node
            except error.LookupError:
                if not warned[0]:
                    warned[0] = True
                    self.ui.warn(_("warning: ignoring unknown"
                                   " working parent %s!\n") % short(node))
                return nullid

        return dirstate.dirstate(self.opener, self.ui, self.root, validate)

    def __getitem__(self, changeid):
        if changeid is None:
            return context.workingctx(self)
        return context.changectx(self, changeid)

    def __contains__(self, changeid):
        try:
            return bool(self.lookup(changeid))
        except error.RepoLookupError:
            return False

    def __nonzero__(self):
        return True

    def __len__(self):
        return len(self.changelog)

    def __iter__(self):
        return iter(self.changelog)

    def revs(self, expr, *args):
        '''Return a list of revisions matching the given revset'''
        expr = revset.formatspec(expr, *args)
        m = revset.match(None, expr)
        return m(self, revset.spanset(self))

    def set(self, expr, *args):
        '''
        Yield a context for each matching revision, after doing arg
        replacement via revset.formatspec
        '''
        for r in self.revs(expr, *args):
            yield self[r]

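    # A sketch of the interpolation these two helpers provide, assuming the
    # formatspec conventions of this era (%d for an int, %n for a binary
    # node):
    #
    #     revs = repo.revs('%d::%d', 0, 5)            # revision numbers
    #     for ctx in repo.set('children(%n)', node):  # contexts, lazily
    #         ...
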
    def url(self):
        return 'file:' + self.root

    def hook(self, name, throw=False, **args):
        return hook.hook(self.ui, self, name, throw, **args)

    @unfilteredmethod
    def _tag(self, names, node, message, local, user, date, extra={}):
        if isinstance(names, str):
            names = (names,)

        branches = self.branchmap()
        for name in names:
            self.hook('pretag', throw=True, node=hex(node), tag=name,
                      local=local)
            if name in branches:
                self.ui.warn(_("warning: tag %s conflicts with existing"
                               " branch name\n") % name)

        def writetags(fp, names, munge, prevtags):
            fp.seek(0, 2)
            if prevtags and prevtags[-1] != '\n':
                fp.write('\n')
            for name in names:
                m = munge and munge(name) or name
                if (self._tagscache.tagtypes and
                    name in self._tagscache.tagtypes):
                    old = self.tags().get(name, nullid)
                    fp.write('%s %s\n' % (hex(old), m))
                fp.write('%s %s\n' % (hex(node), m))
            fp.close()

        prevtags = ''
        if local:
            try:
                fp = self.opener('localtags', 'r+')
            except IOError:
                fp = self.opener('localtags', 'a')
            else:
                prevtags = fp.read()

            # local tags are stored in the current charset
            writetags(fp, names, None, prevtags)
            for name in names:
                self.hook('tag', node=hex(node), tag=name, local=local)
            return

        try:
            fp = self.wfile('.hgtags', 'rb+')
        except IOError, e:
            if e.errno != errno.ENOENT:
                raise
            fp = self.wfile('.hgtags', 'ab')
        else:
            prevtags = fp.read()

        # committed tags are stored in UTF-8
        writetags(fp, names, encoding.fromlocal, prevtags)

        fp.close()

        self.invalidatecaches()

        if '.hgtags' not in self.dirstate:
            self[None].add(['.hgtags'])

        m = matchmod.exact(self.root, '', ['.hgtags'])
        tagnode = self.commit(message, user, date, extra=extra, match=m)

        for name in names:
            self.hook('tag', node=hex(node), tag=name, local=local)

        return tagnode

    def tag(self, names, node, message, local, user, date):
        '''tag a revision with one or more symbolic names.

        names is a list of strings or, when adding a single tag, names may be a
        string.

        if local is True, the tags are stored in a per-repository file.
        otherwise, they are stored in the .hgtags file, and a new
        changeset is committed with the change.

        keyword arguments:

        local: whether to store tags in non-version-controlled file
        (default False)

        message: commit message to use if committing

        user: name of user to use if committing

        date: date tuple to use if committing'''

        if not local:
            for x in self.status()[:5]:
                if '.hgtags' in x:
                    raise util.Abort(_('working copy of .hgtags is changed '
                                       '(please commit .hgtags manually)'))

        self.tags() # instantiate the cache
        self._tag(names, node, message, local, user, date)

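    # A hypothetical caller, matching the positional signature above
    # (tag name, user, and message are invented for illustration):
    #
    #     repo.tag(['v1.0'], repo['tip'].node(), 'Added tag v1.0 for tip',
    #              False, 'alice <alice@example.com>', None)
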
    @filteredpropertycache
    def _tagscache(self):
        '''Returns a tagscache object that contains various tags related
        caches.'''

        # This simplifies its cache management by having one decorated
        # function (this one) and the rest simply fetch things from it.
        class tagscache(object):
            def __init__(self):
                # These two define the set of tags for this repository. tags
                # maps tag name to node; tagtypes maps tag name to 'global' or
                # 'local'. (Global tags are defined by .hgtags across all
                # heads, and local tags are defined in .hg/localtags.)
                # They constitute the in-memory cache of tags.
                self.tags = self.tagtypes = None

                self.nodetagscache = self.tagslist = None

        cache = tagscache()
        cache.tags, cache.tagtypes = self._findtags()

        return cache

    def tags(self):
        '''return a mapping of tag to node'''
        t = {}
        if self.changelog.filteredrevs:
            tags, tt = self._findtags()
        else:
            tags = self._tagscache.tags
        for k, v in tags.iteritems():
            try:
                # ignore tags to unknown nodes
                self.changelog.rev(v)
                t[k] = v
            except (error.LookupError, ValueError):
                pass
        return t

    def _findtags(self):
        '''Do the hard work of finding tags. Return a pair of dicts
        (tags, tagtypes) where tags maps tag name to node, and tagtypes
        maps tag name to a string like \'global\' or \'local\'.
        Subclasses or extensions are free to add their own tags, but
        should be aware that the returned dicts will be retained for the
        duration of the localrepo object.'''

        # XXX what tagtype should subclasses/extensions use? Currently
        # mq and bookmarks add tags, but do not set the tagtype at all.
        # Should each extension invent its own tag type? Should there
        # be one tagtype for all such "virtual" tags? Or is the status
        # quo fine?

        alltags = {} # map tag name to (node, hist)
        tagtypes = {}

        tagsmod.findglobaltags(self.ui, self, alltags, tagtypes)
        tagsmod.readlocaltags(self.ui, self, alltags, tagtypes)

        # Build the return dicts. Have to re-encode tag names because
        # the tags module always uses UTF-8 (in order not to lose info
        # writing to the cache), but the rest of Mercurial wants them in
        # local encoding.
        tags = {}
        for (name, (node, hist)) in alltags.iteritems():
            if node != nullid:
                tags[encoding.tolocal(name)] = node
        tags['tip'] = self.changelog.tip()
        tagtypes = dict([(encoding.tolocal(name), value)
                         for (name, value) in tagtypes.iteritems()])
        return (tags, tagtypes)

    def tagtype(self, tagname):
        '''
        return the type of the given tag. result can be:

        'local'  : a local tag
        'global' : a global tag
        None     : tag does not exist
        '''

        return self._tagscache.tagtypes.get(tagname)

    def tagslist(self):
        '''return a list of tags ordered by revision'''
        if not self._tagscache.tagslist:
            l = []
            for t, n in self.tags().iteritems():
                r = self.changelog.rev(n)
                l.append((r, t, n))
            self._tagscache.tagslist = [(t, n) for r, t, n in sorted(l)]

        return self._tagscache.tagslist

    def nodetags(self, node):
        '''return the tags associated with a node'''
        if not self._tagscache.nodetagscache:
            nodetagscache = {}
            for t, n in self._tagscache.tags.iteritems():
                nodetagscache.setdefault(n, []).append(t)
            for tags in nodetagscache.itervalues():
                tags.sort()
            self._tagscache.nodetagscache = nodetagscache
        return self._tagscache.nodetagscache.get(node, [])

    def nodebookmarks(self, node):
        marks = []
        for bookmark, n in self._bookmarks.iteritems():
            if n == node:
                marks.append(bookmark)
        return sorted(marks)

    def branchmap(self):
        '''returns a dictionary {branch: [branchheads]} with branchheads
        ordered by increasing revision number'''
        branchmap.updatecache(self)
        return self._branchcaches[self.filtername]

    def branchtip(self, branch):
        '''return the tip node for a given branch'''
        try:
            return self.branchmap().branchtip(branch)
        except KeyError:
            raise error.RepoLookupError(_("unknown branch '%s'") % branch)

    def lookup(self, key):
        return self[key].node()

    def lookupbranch(self, key, remote=None):
        repo = remote or self
        if key in repo.branchmap():
            return key

        repo = (remote and remote.local()) and remote or self
        return repo[key].branch()

    def known(self, nodes):
        nm = self.changelog.nodemap
        pc = self._phasecache
        result = []
        for n in nodes:
            r = nm.get(n)
            resp = not (r is None or pc.phase(self, r) >= phases.secret)
            result.append(resp)
        return result

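    # For example (a sketch): discovery asks a server about candidate nodes,
    # and the phase check above makes secret changesets look unknown so they
    # are never exchanged.
    #
    #     repo.known([publicnode, secretnode, missingnode])
    #     # -> [True, False, False]
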
    def local(self):
        return self

    def cancopy(self):
        # so statichttprepo's override of local() works
        if not self.local():
            return False
        if not self.ui.configbool('phases', 'publish', True):
            return True
        # if publishing we can't copy if there is filtered content
        return not self.filtered('visible').changelog.filteredrevs

    def join(self, f):
        return os.path.join(self.path, f)

    def wjoin(self, f):
        return os.path.join(self.root, f)

    def file(self, f):
        if f[0] == '/':
            f = f[1:]
        return filelog.filelog(self.sopener, f)

    def changectx(self, changeid):
        return self[changeid]

    def parents(self, changeid=None):
        '''get list of changectxs for parents of changeid'''
        return self[changeid].parents()

    def setparents(self, p1, p2=nullid):
        copies = self.dirstate.setparents(p1, p2)
        pctx = self[p1]
        if copies:
            # Adjust copy records, the dirstate cannot do it, it
            # requires access to parents manifests. Preserve them
            # only for entries added to first parent.
            for f in copies:
                if f not in pctx and copies[f] in pctx:
                    self.dirstate.copy(copies[f], f)
        if p2 == nullid:
            for f, s in sorted(self.dirstate.copies().items()):
                if f not in pctx and s not in pctx:
                    self.dirstate.copy(None, f)

    def filectx(self, path, changeid=None, fileid=None):
        """changeid can be a changeset revision, node, or tag.
           fileid can be a file revision or node."""
        return context.filectx(self, path, changeid, fileid)

    def getcwd(self):
        return self.dirstate.getcwd()

    def pathto(self, f, cwd=None):
        return self.dirstate.pathto(f, cwd)

    def wfile(self, f, mode='r'):
        return self.wopener(f, mode)

    def _link(self, f):
        return self.wvfs.islink(f)

    def _loadfilter(self, filter):
        if filter not in self.filterpats:
            l = []
            for pat, cmd in self.ui.configitems(filter):
                if cmd == '!':
                    continue
                mf = matchmod.match(self.root, '', [pat])
                fn = None
                params = cmd
                for name, filterfn in self._datafilters.iteritems():
                    if cmd.startswith(name):
                        fn = filterfn
                        params = cmd[len(name):].lstrip()
                        break
                if not fn:
                    fn = lambda s, c, **kwargs: util.filter(s, c)
                # Wrap old filters not supporting keyword arguments
                if not inspect.getargspec(fn)[2]:
                    oldfn = fn
                    fn = lambda s, c, **kwargs: oldfn(s, c)
                l.append((mf, fn, params))
            self.filterpats[filter] = l
        return self.filterpats[filter]

    def _filter(self, filterpats, filename, data):
        for mf, fn, cmd in filterpats:
            if mf(filename):
                self.ui.debug("filtering %s through %s\n" % (filename, cmd))
                data = fn(data, cmd, ui=self.ui, repo=self, filename=filename)
                break

        return data

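    # The patterns fed to _loadfilter come from the [encode]/[decode]
    # sections of hgrc; an illustrative configuration (the gzip commands are
    # examples, 'pipe:' being one of the built-in data filter prefixes):
    #
    #     [encode]
    #     *.gz = pipe: gunzip
    #
    #     [decode]
    #     *.gz = pipe: gzip
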
    @unfilteredpropertycache
    def _encodefilterpats(self):
        return self._loadfilter('encode')

    @unfilteredpropertycache
    def _decodefilterpats(self):
        return self._loadfilter('decode')

    def adddatafilter(self, name, filter):
        self._datafilters[name] = filter

    def wread(self, filename):
        if self._link(filename):
            data = self.wvfs.readlink(filename)
        else:
            data = self.wopener.read(filename)
        return self._filter(self._encodefilterpats, filename, data)

    def wwrite(self, filename, data, flags):
        data = self._filter(self._decodefilterpats, filename, data)
        if 'l' in flags:
            self.wopener.symlink(data, filename)
        else:
            self.wopener.write(filename, data)
            if 'x' in flags:
                self.wvfs.setflags(filename, False, True)

    def wwritedata(self, filename, data):
        return self._filter(self._decodefilterpats, filename, data)

    def transaction(self, desc, report=None):
        tr = self._transref and self._transref() or None
        if tr and tr.running():
            return tr.nest()

        # abort here if the journal already exists
        if self.svfs.exists("journal"):
            raise error.RepoError(
                _("abandoned transaction found - run hg recover"))

        def onclose():
            self.store.write(tr)

        self._writejournal(desc)
        renames = [(vfs, x, undoname(x)) for vfs, x in self._journalfiles()]
        rp = report and report or self.ui.warn
        tr = transaction.transaction(rp, self.sopener,
                                     "journal",
                                     aftertrans(renames),
                                     self.store.createmode,
                                     onclose)
        self._transref = weakref.ref(tr)
        return tr

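    # The usual calling pattern, sketched: tr.close() makes the journaled
    # changes permanent, while tr.release() without a prior close() aborts
    # and undoes the partial write.
    #
    #     lock = repo.lock()
    #     try:
    #         tr = repo.transaction('example')
    #         try:
    #             ...                      # write revlog data under tr
    #             tr.close()
    #         finally:
    #             tr.release()
    #     finally:
    #         lock.release()
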
    def _journalfiles(self):
        return ((self.svfs, 'journal'),
                (self.vfs, 'journal.dirstate'),
                (self.vfs, 'journal.branch'),
                (self.vfs, 'journal.desc'),
                (self.vfs, 'journal.bookmarks'),
                (self.svfs, 'journal.phaseroots'))

    def undofiles(self):
        return [(vfs, undoname(x)) for vfs, x in self._journalfiles()]

    def _writejournal(self, desc):
        self.opener.write("journal.dirstate",
                          self.opener.tryread("dirstate"))
        self.opener.write("journal.branch",
                          encoding.fromlocal(self.dirstate.branch()))
        self.opener.write("journal.desc",
                          "%d\n%s\n" % (len(self), desc))
        self.opener.write("journal.bookmarks",
                          self.opener.tryread("bookmarks"))
        self.sopener.write("journal.phaseroots",
                           self.sopener.tryread("phaseroots"))

    def recover(self):
        lock = self.lock()
        try:
            if self.svfs.exists("journal"):
                self.ui.status(_("rolling back interrupted transaction\n"))
                transaction.rollback(self.sopener, "journal",
                                     self.ui.warn)
                self.invalidate()
                return True
            else:
                self.ui.warn(_("no interrupted transaction available\n"))
                return False
        finally:
            lock.release()

    def rollback(self, dryrun=False, force=False):
        wlock = lock = None
        try:
            wlock = self.wlock()
            lock = self.lock()
            if self.svfs.exists("undo"):
                return self._rollback(dryrun, force)
            else:
                self.ui.warn(_("no rollback information available\n"))
                return 1
        finally:
            release(lock, wlock)

    @unfilteredmethod # Until we get smarter cache management
    def _rollback(self, dryrun, force):
        ui = self.ui
        try:
            args = self.opener.read('undo.desc').splitlines()
            (oldlen, desc, detail) = (int(args[0]), args[1], None)
            if len(args) >= 3:
                detail = args[2]
            oldtip = oldlen - 1

            if detail and ui.verbose:
                msg = (_('repository tip rolled back to revision %s'
                         ' (undo %s: %s)\n')
                       % (oldtip, desc, detail))
            else:
                msg = (_('repository tip rolled back to revision %s'
                         ' (undo %s)\n')
                       % (oldtip, desc))
        except IOError:
            msg = _('rolling back unknown transaction\n')
            desc = None

        if not force and self['.'] != self['tip'] and desc == 'commit':
            raise util.Abort(
                _('rollback of last commit while not checked out '
                  'may lose data'), hint=_('use -f to force'))

        ui.status(msg)
        if dryrun:
            return 0

        parents = self.dirstate.parents()
        self.destroying()
        transaction.rollback(self.sopener, 'undo', ui.warn)
        if self.vfs.exists('undo.bookmarks'):
            self.vfs.rename('undo.bookmarks', 'bookmarks')
        if self.svfs.exists('undo.phaseroots'):
            self.svfs.rename('undo.phaseroots', 'phaseroots')
        self.invalidate()

        parentgone = (parents[0] not in self.changelog.nodemap or
                      parents[1] not in self.changelog.nodemap)
        if parentgone:
            self.vfs.rename('undo.dirstate', 'dirstate')
            try:
                branch = self.opener.read('undo.branch')
                self.dirstate.setbranch(encoding.tolocal(branch))
            except IOError:
                ui.warn(_('named branch could not be reset: '
                          'current branch is still \'%s\'\n')
                        % self.dirstate.branch())

            self.dirstate.invalidate()
            parents = tuple([p.rev() for p in self.parents()])
            if len(parents) > 1:
                ui.status(_('working directory now based on '
                            'revisions %d and %d\n') % parents)
            else:
                ui.status(_('working directory now based on '
                            'revision %d\n') % parents)
        # TODO: if we know which new heads may result from this rollback, pass
        # them to destroy(), which will prevent the branchhead cache from being
        # invalidated.
        self.destroyed()
        return 0

    def invalidatecaches(self):

        if '_tagscache' in vars(self):
            # can't use delattr on proxy
            del self.__dict__['_tagscache']

        self.unfiltered()._branchcaches.clear()
        self.invalidatevolatilesets()

    def invalidatevolatilesets(self):
        self.filteredrevcache.clear()
        obsolete.clearobscaches(self)

    def invalidatedirstate(self):
        '''Invalidates the dirstate, causing the next call to dirstate
        to check if it was modified since the last time it was read,
        rereading it if it has.

        This is different to dirstate.invalidate() that it doesn't always
        rereads the dirstate. Use dirstate.invalidate() if you want to
        explicitly read the dirstate again (i.e. restoring it to a previous
        known good state).'''
        if hasunfilteredcache(self, 'dirstate'):
            for k in self.dirstate._filecache:
                try:
                    delattr(self.dirstate, k)
                except AttributeError:
                    pass
            delattr(self.unfiltered(), 'dirstate')

    def invalidate(self):
        unfiltered = self.unfiltered() # all file caches are stored unfiltered
        for k in self._filecache:
            # dirstate is invalidated separately in invalidatedirstate()
            if k == 'dirstate':
                continue

            try:
                delattr(unfiltered, k)
            except AttributeError:
                pass
        self.invalidatecaches()
        self.store.invalidatecaches()

    def invalidateall(self):
        '''Fully invalidates both store and non-store parts, causing the
        subsequent operation to reread any outside changes.'''
        # extension should hook this to invalidate its caches
        self.invalidate()
        self.invalidatedirstate()

    def _lock(self, vfs, lockname, wait, releasefn, acquirefn, desc):
        try:
            l = lockmod.lock(vfs, lockname, 0, releasefn, desc=desc)
        except error.LockHeld, inst:
            if not wait:
                raise
            self.ui.warn(_("waiting for lock on %s held by %r\n") %
                         (desc, inst.locker))
            # default to 600 seconds timeout
            l = lockmod.lock(vfs, lockname,
                             int(self.ui.config("ui", "timeout", "600")),
                             releasefn, desc=desc)
            self.ui.warn(_("got lock after %s seconds\n") % l.delay)
        if acquirefn:
            acquirefn()
        return l

    def _afterlock(self, callback):
        """add a callback to the current repository lock.

        The callback will be executed on lock release."""
        l = self._lockref and self._lockref()
        if l:
            l.postrelease.append(callback)
        else:
            callback()

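    # Sketch: deferring work until the store lock drops; the callback runs
    # immediately when no lock is currently held. The hook name is
    # hypothetical.
    #
    #     def notify():
    #         repo.hook('myhook')
    #     repo._afterlock(notify)
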
    def lock(self, wait=True):
        '''Lock the repository store (.hg/store) and return a weak reference
        to the lock. Use this before modifying the store (e.g. committing or
        stripping). If you are opening a transaction, get a lock as well.)'''
        l = self._lockref and self._lockref()
        if l is not None and l.held:
            l.lock()
            return l

        def unlock():
            if hasunfilteredcache(self, '_phasecache'):
                self._phasecache.write()
            for k, ce in self._filecache.items():
                if k == 'dirstate' or k not in self.__dict__:
                    continue
                ce.refresh()

        l = self._lock(self.svfs, "lock", wait, unlock,
                       self.invalidate, _('repository %s') % self.origroot)
        self._lockref = weakref.ref(l)
        return l

    def wlock(self, wait=True):
        '''Lock the non-store parts of the repository (everything under
        .hg except .hg/store) and return a weak reference to the lock.
        Use this before modifying files in .hg.'''
        l = self._wlockref and self._wlockref()
        if l is not None and l.held:
            l.lock()
            return l

        def unlock():
            self.dirstate.write()
            self._filecache['dirstate'].refresh()

        l = self._lock(self.vfs, "wlock", wait, unlock,
                       self.invalidatedirstate, _('working directory of %s') %
                       self.origroot)
        self._wlockref = weakref.ref(l)
        return l

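    # Lock ordering sketch: callers that need both locks take wlock before
    # lock, mirroring rollback() above:
    #
    #     wlock = repo.wlock()
    #     lock = repo.lock()
    #     try:
    #         ...
    #     finally:
    #         release(lock, wlock)
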
    def _filecommit(self, fctx, manifest1, manifest2, linkrev, tr, changelist):
        """
        commit an individual file as part of a larger transaction
        """

        fname = fctx.path()
        text = fctx.data()
        flog = self.file(fname)
        fparent1 = manifest1.get(fname, nullid)
        fparent2 = fparent2o = manifest2.get(fname, nullid)

        meta = {}
        copy = fctx.renamed()
        if copy and copy[0] != fname:
            # Mark the new revision of this file as a copy of another
            # file. This copy data will effectively act as a parent
            # of this new revision. If this is a merge, the first
            # parent will be the nullid (meaning "look up the copy data")
            # and the second one will be the other parent. For example:
            #
            # 0 --- 1 --- 3   rev1 changes file foo
            #   \       /     rev2 renames foo to bar and changes it
            #    \- 2 -/      rev3 should have bar with all changes and
            #                      should record that bar descends from
            #                      bar in rev2 and foo in rev1
            #
            # this allows this merge to succeed:
            #
            # 0 --- 1 --- 3   rev4 reverts the content change from rev2
            #   \       /     merging rev3 and rev4 should use bar@rev2
            #    \- 2 --- 4        as the merge base
            #

            cfname = copy[0]
            crev = manifest1.get(cfname)
            newfparent = fparent2

            if manifest2: # branch merge
                if fparent2 == nullid or crev is None: # copied on remote side
                    if cfname in manifest2:
                        crev = manifest2[cfname]
                        newfparent = fparent1

            # find source in nearest ancestor if we've lost track
            if not crev:
                self.ui.debug(" %s: searching for copy revision for %s\n" %
                              (fname, cfname))
                for ancestor in self[None].ancestors():
                    if cfname in ancestor:
                        crev = ancestor[cfname].filenode()
                        break

            if crev:
                self.ui.debug(" %s: copy %s:%s\n" % (fname, cfname, hex(crev)))
                meta["copy"] = cfname
                meta["copyrev"] = hex(crev)
                fparent1, fparent2 = nullid, newfparent
            else:
                self.ui.warn(_("warning: can't find ancestor for '%s' "
                               "copied from '%s'!\n") % (fname, cfname))

        elif fparent1 == nullid:
            fparent1, fparent2 = fparent2, nullid
        elif fparent2 != nullid:
            # is one parent an ancestor of the other?
            fparentancestors = flog.commonancestorsheads(fparent1, fparent2)
            if fparent1 in fparentancestors:
                fparent1, fparent2 = fparent2, nullid
            elif fparent2 in fparentancestors:
                fparent2 = nullid

        # is the file changed?
        if fparent2 != nullid or flog.cmp(fparent1, text) or meta:
            changelist.append(fname)
            return flog.add(text, meta, tr, linkrev, fparent1, fparent2)

        # are just the flags changed during merge?
        if fparent1 != fparent2o and manifest1.flags(fname) != fctx.flags():
            changelist.append(fname)

        return fparent1

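    # Illustrative note (reconstructed from the rename branch above; the
    # values are hypothetical): a recorded rename lands in the filelog
    # metadata as
    #
    #     meta = {'copy': 'foo', 'copyrev': hex(crev)}
    #
    # which is what later lets e.g. 'hg log --copies' resolve bar back
    # to foo.
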
    @unfilteredmethod
    def commit(self, text="", user=None, date=None, match=None, force=False,
               editor=False, extra={}):
        """Add a new revision to current repository.

        Revision information is gathered from the working directory,
        match can be used to filter the committed files. If editor is
        supplied, it is called to get a commit message.
        """

        def fail(f, msg):
            raise util.Abort('%s: %s' % (f, msg))

        if not match:
            match = matchmod.always(self.root, '')

        if not force:
            vdirs = []
            match.explicitdir = vdirs.append
            match.bad = fail

        wlock = self.wlock()
        try:
            wctx = self[None]
            merge = len(wctx.parents()) > 1

            if (not force and merge and match and
                (match.files() or match.anypats())):
                raise util.Abort(_('cannot partially commit a merge '
                                   '(do not specify files or patterns)'))

            changes = self.status(match=match, clean=force)
            if force:
                changes[0].extend(changes[6]) # mq may commit unchanged files

            # check subrepos
            subs = []
            commitsubs = set()
            newstate = wctx.substate.copy()
            # only manage subrepos and .hgsubstate if .hgsub is present
            if '.hgsub' in wctx:
                # we'll decide whether to track this ourselves, thanks
                for c in changes[:3]:
                    if '.hgsubstate' in c:
                        c.remove('.hgsubstate')

                # compare current state to last committed state
                # build new substate based on last committed state
                oldstate = wctx.p1().substate
                for s in sorted(newstate.keys()):
                    if not match(s):
                        # ignore working copy, use old state if present
                        if s in oldstate:
                            newstate[s] = oldstate[s]
                            continue
                        if not force:
                            raise util.Abort(
                                _("commit with new subrepo %s excluded") % s)
                    if wctx.sub(s).dirty(True):
                        if not self.ui.configbool('ui', 'commitsubrepos'):
                            raise util.Abort(
                                _("uncommitted changes in subrepo %s") % s,
                                hint=_("use --subrepos for recursive commit"))
                        subs.append(s)
                        commitsubs.add(s)
                    else:
                        bs = wctx.sub(s).basestate()
                        newstate[s] = (newstate[s][0], bs, newstate[s][2])
                        if oldstate.get(s, (None, None, None))[1] != bs:
                            subs.append(s)

                # check for removed subrepos
                for p in wctx.parents():
                    r = [s for s in p.substate if s not in newstate]
                    subs += [s for s in r if match(s)]
                if subs:
                    if (not match('.hgsub') and
                        '.hgsub' in (wctx.modified() + wctx.added())):
                        raise util.Abort(
                            _("can't commit subrepos without .hgsub"))
                    changes[0].insert(0, '.hgsubstate')

            elif '.hgsub' in changes[2]:
                # clean up .hgsubstate when .hgsub is removed
                if ('.hgsubstate' in wctx and
                    '.hgsubstate' not in changes[0] + changes[1] + changes[2]):
                    changes[2].insert(0, '.hgsubstate')

            # make sure all explicit patterns are matched
            if not force and match.files():
                matched = set(changes[0] + changes[1] + changes[2])

                for f in match.files():
                    f = self.dirstate.normalize(f)
                    if f == '.' or f in matched or f in wctx.substate:
                        continue
                    if f in changes[3]: # missing
                        fail(f, _('file not found!'))
                    if f in vdirs: # visited directory
                        d = f + '/'
                        for mf in matched:
                            if mf.startswith(d):
                                break
                        else:
                            fail(f, _("no match under directory!"))
                    elif f not in self.dirstate:
                        fail(f, _("file not tracked!"))

            cctx = context.workingctx(self, text, user, date, extra, changes)

            if (not force and not extra.get("close") and not merge
                and not cctx.files()
                and wctx.branch() == wctx.p1().branch()):
                return None

            if merge and cctx.deleted():
                raise util.Abort(_("cannot commit merge with missing files"))

            ms = mergemod.mergestate(self)
            for f in changes[0]:
                if f in ms and ms[f] == 'u':
                    raise util.Abort(_("unresolved merge conflicts "
                                       "(see hg help resolve)"))

            if editor:
                cctx._text = editor(self, cctx, subs)
            edited = (text != cctx._text)

            # Save commit message in case this transaction gets rolled back
            # (e.g. by a pretxncommit hook). Leave the content alone on
            # the assumption that the user will use the same editor again.
            msgfn = self.savecommitmessage(cctx._text)

            # commit subs and write new state
            if subs:
                for s in sorted(commitsubs):
                    sub = wctx.sub(s)
                    self.ui.status(_('committing subrepository %s\n') %
                                   subrepo.subrelpath(sub))
                    sr = sub.commit(cctx._text, user, date)
                    newstate[s] = (newstate[s][0], sr)
                subrepo.writestate(self, newstate)

            p1, p2 = self.dirstate.parents()
            hookp1, hookp2 = hex(p1), (p2 != nullid and hex(p2) or '')
            try:
                self.hook("precommit", throw=True, parent1=hookp1,
                          parent2=hookp2)
                ret = self.commitctx(cctx, True)
            except: # re-raises
                if edited:
                    self.ui.write(
                        _('note: commit message saved in %s\n') % msgfn)
                raise

            # update bookmarks, dirstate and mergestate
            bookmarks.update(self, [p1, p2], ret)
            cctx.markcommitted(ret)
            ms.reset()
        finally:
            wlock.release()

        def commithook(node=hex(ret), parent1=hookp1, parent2=hookp2):
            self.hook("commit", node=node, parent1=parent1, parent2=parent2)
        self._afterlock(commithook)
        return ret

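    # Minimal calling sketch (assumption, not from this changeset; the
    # message and user below are placeholders):
    #
    #     node = repo.commit(text='fix frobnicator', user='alice <a@b.org>')
    #     if node is None:
    #         pass  # nothing to commit: no file changes, same branch as p1
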
    @unfilteredmethod
    def commitctx(self, ctx, error=False):
        """Add a new revision to current repository.
        Revision information is passed via the context argument.
        """

        tr = lock = None
        removed = list(ctx.removed())
        p1, p2 = ctx.p1(), ctx.p2()
        user = ctx.user()

        lock = self.lock()
        try:
            tr = self.transaction("commit")
            trp = weakref.proxy(tr)

            if ctx.files():
                m1 = p1.manifest().copy()
                m2 = p2.manifest()

                # check in files
                new = {}
                changed = []
                linkrev = len(self)
                for f in sorted(ctx.modified() + ctx.added()):
                    self.ui.note(f + "\n")
                    try:
                        fctx = ctx[f]
                        new[f] = self._filecommit(fctx, m1, m2, linkrev, trp,
                                                  changed)
                        m1.set(f, fctx.flags())
                    except OSError, inst:
                        self.ui.warn(_("trouble committing %s!\n") % f)
                        raise
                    except IOError, inst:
                        errcode = getattr(inst, 'errno', errno.ENOENT)
                        if error or errcode and errcode != errno.ENOENT:
                            self.ui.warn(_("trouble committing %s!\n") % f)
                            raise
                        else:
                            removed.append(f)

                # update manifest
                m1.update(new)
                removed = [f for f in sorted(removed) if f in m1 or f in m2]
                drop = [f for f in removed if f in m1]
                for f in drop:
                    del m1[f]
                mn = self.manifest.add(m1, trp, linkrev, p1.manifestnode(),
                                       p2.manifestnode(), (new, drop))
                files = changed + removed
            else:
                mn = p1.manifestnode()
                files = []

            # update changelog
            self.changelog.delayupdate()
            n = self.changelog.add(mn, files, ctx.description(),
                                   trp, p1.node(), p2.node(),
                                   user, ctx.date(), ctx.extra().copy())
            p = lambda: self.changelog.writepending() and self.root or ""
            xp1, xp2 = p1.hex(), p2 and p2.hex() or ''
            self.hook('pretxncommit', throw=True, node=hex(n), parent1=xp1,
                      parent2=xp2, pending=p)
            self.changelog.finalize(trp)
            # set the new commit in the proper phase
            targetphase = subrepo.newcommitphase(self.ui, ctx)
            if targetphase:
                # retracting the boundary does not alter parent changesets.
                # if a parent has a higher phase, the resulting phase will
                # be compliant anyway
                #
                # if the minimal phase was 0 we don't need to retract anything
                phases.retractboundary(self, targetphase, [n])
            tr.close()
            branchmap.updatecache(self.filtered('served'))
            return n
        finally:
            if tr:
                tr.release()
            lock.release()

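    # Hedged sketch (assumes the in-memory context API of this era; all
    # names below are illustrative, p1node stands in for the desired parent
    # node): commitctx also accepts synthetic contexts such as
    # context.memctx, so a tool can commit without touching the working
    # directory:
    #
    #     def getfctx(repo, memctx, path):
    #         return context.memfilectx(path, 'new contents\n')
    #
    #     mctx = context.memctx(repo, (p1node, None), 'message',
    #                           ['a.txt'], getfctx, user='alice')
    #     node = repo.commitctx(mctx)
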
    @unfilteredmethod
    def destroying(self):
        '''Inform the repository that nodes are about to be destroyed.
        Intended for use by strip and rollback, so there's a common
        place for anything that has to be done before destroying history.

        This is mostly useful for saving state that is in memory and waiting
        to be flushed when the current lock is released. Because a call to
        destroyed is imminent, the repo will be invalidated causing those
        changes to stay in memory (waiting for the next unlock), or vanish
        completely.
        '''
        # When using the same lock to commit and strip, the phasecache is left
        # dirty after committing. Then when we strip, the repo is invalidated,
        # causing those changes to disappear.
        if '_phasecache' in vars(self):
            self._phasecache.write()

    @unfilteredmethod
    def destroyed(self):
        '''Inform the repository that nodes have been destroyed.
        Intended for use by strip and rollback, so there's a common
        place for anything that has to be done after destroying history.
        '''
        # When one tries to:
        # 1) destroy nodes thus calling this method (e.g. strip)
        # 2) use phasecache somewhere (e.g. commit)
        #
        # then 2) will fail because the phasecache contains nodes that were
        # removed. We can either remove phasecache from the filecache,
        # causing it to reload next time it is accessed, or simply filter
        # the removed nodes now and write the updated cache.
        self._phasecache.filterunknown(self)
        self._phasecache.write()

        # update the 'served' branch cache to help read-only server processes
        # Thanks to branchcache collaboration this is done from the nearest
        # filtered subset and it is expected to be fast.
        branchmap.updatecache(self.filtered('served'))

        # Ensure the persistent tag cache is updated. Doing it now
        # means that the tag cache only has to worry about destroyed
        # heads immediately after a strip/rollback. That in turn
        # guarantees that "cachetip == currenttip" (comparing both rev
        # and node) always means no nodes have been added or destroyed.

        # XXX this is suboptimal when qrefresh'ing: we strip the current
        # head, refresh the tag cache, then immediately add a new head.
        # But I think doing it this way is necessary for the "instant
        # tag cache retrieval" case to work.
        self.invalidate()

    def walk(self, match, node=None):
        '''
        walk recursively through the directory tree or a given
        changeset, finding all files matched by the match
        function
        '''
        return self[node].walk(match)

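    # Usage sketch (assumption): walking a revision with a matcher, e.g.
    # every tracked file under src/ in the working directory:
    #
    #     m = matchmod.match(repo.root, repo.getcwd(), ['path:src'])
    #     for f in repo.walk(m):
    #         print f
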
    def status(self, node1='.', node2=None, match=None,
               ignored=False, clean=False, unknown=False,
               listsubrepos=False):
        """return status of files between two nodes or node and working
        directory.

        If node1 is None, use the first dirstate parent instead.
        If node2 is None, compare node1 with working directory.
        """

        def mfmatches(ctx):
            mf = ctx.manifest().copy()
            if match.always():
                return mf
            for fn in mf.keys():
                if not match(fn):
                    del mf[fn]
            return mf

        ctx1 = self[node1]
        ctx2 = self[node2]

        working = ctx2.rev() is None
        parentworking = working and ctx1 == self['.']
        match = match or matchmod.always(self.root, self.getcwd())
        listignored, listclean, listunknown = ignored, clean, unknown

        # load earliest manifest first for caching reasons
        if not working and ctx2.rev() < ctx1.rev():
            ctx2.manifest()

        if not parentworking:
            def bad(f, msg):
                # 'f' may be a directory pattern from 'match.files()',
                # so 'f not in ctx1' is not enough
                if f not in ctx1 and f not in ctx1.dirs():
                    self.ui.warn('%s: %s\n' % (self.dirstate.pathto(f), msg))
            match.bad = bad

        if working: # we need to scan the working dir
            subrepos = []
            if '.hgsub' in self.dirstate:
                subrepos = sorted(ctx2.substate)
            s = self.dirstate.status(match, subrepos, listignored,
                                     listclean, listunknown)
            cmp, modified, added, removed, deleted, unknown, ignored, clean = s

            # check for any possibly clean files
            if parentworking and cmp:
                fixup = []
                # do a full compare of any files that might have changed
                for f in sorted(cmp):
                    if (f not in ctx1 or ctx2.flags(f) != ctx1.flags(f)
                        or ctx1[f].cmp(ctx2[f])):
                        modified.append(f)
                    else:
                        fixup.append(f)

                # update dirstate for files that are actually clean
                if fixup:
                    if listclean:
                        clean += fixup

                    try:
                        # updating the dirstate is optional
                        # so we don't wait on the lock
                        wlock = self.wlock(False)
                        try:
                            for f in fixup:
                                self.dirstate.normal(f)
                        finally:
                            wlock.release()
                    except error.LockError:
                        pass

        if not parentworking:
            mf1 = mfmatches(ctx1)
            if working:
                # we are comparing working dir against non-parent
                # generate a pseudo-manifest for the working dir
                mf2 = mfmatches(self['.'])
                for f in cmp + modified + added:
                    mf2[f] = None
                    mf2.set(f, ctx2.flags(f))
                for f in removed:
                    if f in mf2:
                        del mf2[f]
            else:
                # we are comparing two revisions
                deleted, unknown, ignored = [], [], []
                mf2 = mfmatches(ctx2)

            modified, added, clean = [], [], []
            withflags = mf1.withflags() | mf2.withflags()
            for fn, mf2node in mf2.iteritems():
                if fn in mf1:
                    if (fn not in deleted and
                        ((fn in withflags and mf1.flags(fn) != mf2.flags(fn)) or
                         (mf1[fn] != mf2node and
                          (mf2node or ctx1[fn].cmp(ctx2[fn]))))):
                        modified.append(fn)
                    elif listclean:
                        clean.append(fn)
                    del mf1[fn]
                elif fn not in deleted:
                    added.append(fn)
            removed = mf1.keys()

        if working and modified and not self.dirstate._checklink:
            # Symlink placeholders may get non-symlink-like contents
            # via user error or dereferencing by NFS or Samba servers,
            # so we filter out any placeholders that don't look like a
            # symlink
            sane = []
            for f in modified:
                if ctx2.flags(f) == 'l':
                    d = ctx2[f].data()
                    if d == '' or len(d) >= 1024 or '\n' in d or util.binary(d):
                        self.ui.debug('ignoring suspect symlink placeholder'
                                      ' "%s"\n' % f)
                        continue
                sane.append(f)
            modified = sane

        r = modified, added, removed, deleted, unknown, ignored, clean

        if listsubrepos:
            for subpath, sub in scmutil.itersubrepos(ctx1, ctx2):
                if working:
                    rev2 = None
                else:
                    rev2 = ctx2.substate[subpath][1]
                try:
                    submatch = matchmod.narrowmatcher(subpath, match)
                    s = sub.status(rev2, match=submatch, ignored=listignored,
                                   clean=listclean, unknown=listunknown,
                                   listsubrepos=True)
                    for rfiles, sfiles in zip(r, s):
                        rfiles.extend("%s/%s" % (subpath, f) for f in sfiles)
                except error.LookupError:
                    self.ui.status(_("skipping missing subrepository: %s\n")
                                   % subpath)

        for l in r:
            l.sort()
        return r

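    # Consumption sketch (grounded in the return value above): callers
    # unpack the seven sorted lists positionally, e.g.
    #
    #     modified, added, removed, deleted, unknown, ignored, clean = \
    #         repo.status(unknown=True, ignored=True, clean=True)
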
    def heads(self, start=None):
        heads = self.changelog.heads(start)
        # sort the output in rev descending order
        return sorted(heads, key=self.changelog.rev, reverse=True)

    def branchheads(self, branch=None, start=None, closed=False):
        '''return a (possibly filtered) list of heads for the given branch

        Heads are returned in topological order, from newest to oldest.
        If branch is None, use the dirstate branch.
        If start is not None, return only heads reachable from start.
        If closed is True, return heads that are marked as closed as well.
        '''
        if branch is None:
            branch = self[None].branch()
        branches = self.branchmap()
        if branch not in branches:
            return []
        # the cache returns heads ordered lowest to highest
        bheads = list(reversed(branches.branchheads(branch, closed=closed)))
        if start is not None:
            # filter out the heads that cannot be reached from startrev
            fbheads = set(self.changelog.nodesbetween([start], bheads)[2])
            bheads = [h for h in bheads if h in fbheads]
        return bheads

    def branches(self, nodes):
        if not nodes:
            nodes = [self.changelog.tip()]
        b = []
        for n in nodes:
            t = n
            while True:
                p = self.changelog.parents(n)
                if p[1] != nullid or p[0] == nullid:
                    b.append((t, n, p[0], p[1]))
                    break
                n = p[0]
        return b

    def between(self, pairs):
        r = []

        for top, bottom in pairs:
            n, l, i = top, [], 0
            f = 1

            while n != bottom and n != nullid:
                p = self.changelog.parents(n)[0]
                if i == f:
                    l.append(n)
                    f = f * 2
                n = p
                i += 1

            r.append(l)

        return r

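    # Worked example for between() (hypothetical 10-deep first-parent
    # chain): nodes are recorded when the step counter i hits 1, 2, 4, 8,
    # ..., so the list holds the ancestors 1, 2, 4 and 8 steps below top.
    # The legacy discovery protocol uses this exponential sampling to find
    # common ancestors in few roundtrips.
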
    def pull(self, remote, heads=None, force=False):
        return exchange.pull(self, remote, heads, force)

    def checkpush(self, pushop):
        """Extensions can override this function if additional checks have
        to be performed before pushing, or call it if they override push
        command.
        """
        pass

    @unfilteredpropertycache
    def prepushoutgoinghooks(self):
        """Return util.hooks consisting of "(repo, remote, outgoing)"
        functions, which are called before pushing changesets.
        """
        return util.hooks()

    def push(self, remote, force=False, revs=None, newbranch=False):
        return exchange.push(self, remote, force, revs, newbranch)

    def stream_in(self, remote, requirements):
        lock = self.lock()
        try:
            # Save remote branchmap. We will use it later
            # to speed up branchcache creation
            rbranchmap = None
            if remote.capable("branchmap"):
                rbranchmap = remote.branchmap()

            fp = remote.stream_out()
            l = fp.readline()
            try:
                resp = int(l)
            except ValueError:
                raise error.ResponseError(
                    _('unexpected response from remote server:'), l)
            if resp == 1:
                raise util.Abort(_('operation forbidden by server'))
            elif resp == 2:
                raise util.Abort(_('locking the remote repository failed'))
            elif resp != 0:
                raise util.Abort(_('the server sent an unknown error code'))
            self.ui.status(_('streaming all changes\n'))
            l = fp.readline()
            try:
                total_files, total_bytes = map(int, l.split(' ', 1))
            except (ValueError, TypeError):
                raise error.ResponseError(
                    _('unexpected response from remote server:'), l)
            self.ui.status(_('%d files to transfer, %s of data\n') %
                           (total_files, util.bytecount(total_bytes)))
            handled_bytes = 0
            self.ui.progress(_('clone'), 0, total=total_bytes)
            start = time.time()

            tr = self.transaction(_('clone'))
            try:
                for i in xrange(total_files):
                    # XXX doesn't support '\n' or '\r' in filenames
                    l = fp.readline()
                    try:
                        name, size = l.split('\0', 1)
                        size = int(size)
                    except (ValueError, TypeError):
                        raise error.ResponseError(
                            _('unexpected response from remote server:'), l)
                    if self.ui.debugflag:
                        self.ui.debug('adding %s (%s)\n' %
                                      (name, util.bytecount(size)))
                    # for backwards compat, name was partially encoded
                    ofp = self.sopener(store.decodedir(name), 'w')
                    for chunk in util.filechunkiter(fp, limit=size):
                        handled_bytes += len(chunk)
                        self.ui.progress(_('clone'), handled_bytes,
                                         total=total_bytes)
                        ofp.write(chunk)
                    ofp.close()
                tr.close()
            finally:
                tr.release()

            # Writing straight to files circumvented the inmemory caches
            self.invalidate()

            elapsed = time.time() - start
            if elapsed <= 0:
                elapsed = 0.001
            self.ui.progress(_('clone'), None)
            self.ui.status(_('transferred %s in %.1f seconds (%s/sec)\n') %
                           (util.bytecount(total_bytes), elapsed,
                            util.bytecount(total_bytes / elapsed)))

            # new requirements = old non-format requirements +
            #                    new format-related requirements
            #                    from the streamed-in repository
            requirements.update(set(self.requirements) - self.supportedformats)
            self._applyrequirements(requirements)
            self._writerequirements()

            if rbranchmap:
                rbheads = []
                for bheads in rbranchmap.itervalues():
                    rbheads.extend(bheads)

                if rbheads:
                    rtiprev = max((int(self.changelog.rev(node))
                                   for node in rbheads))
                    cache = branchmap.branchcache(rbranchmap,
                                                  self[rtiprev].node(),
                                                  rtiprev)
                    # Try to stick it as low as possible
                    # filters above 'served' are unlikely to be fetched from
                    # a clone
                    for candidate in ('base', 'immutable', 'served'):
                        rview = self.filtered(candidate)
                        if cache.validfor(rview):
                            self._branchcaches[candidate] = cache
                            cache.write(rview)
                            break
            self.invalidate()
            return len(self.heads()) + 1
        finally:
            lock.release()

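    # Wire-format sketch reconstructed from the parser above (the file names
    # are hypothetical): a stream_out response is a status line, a summary
    # line, then a header line plus raw bytes per file:
    #
    #     0                     <- response code (0 = OK)
    #     2 8192                <- '<total_files> <total_bytes>'
    #     data/foo.i\x004096    <- '<name>\0<size>', then <size> raw bytes
    #     data/bar.i\x004096    <- repeated for each remaining file
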
    def clone(self, remote, heads=[], stream=False):
        '''clone remote repository.

        keyword arguments:
        heads: list of revs to clone (forces use of pull)
        stream: use streaming clone if possible'''

        # now, all clients that can request uncompressed clones can
        # read repo formats supported by all servers that can serve
        # them.

        # if revlog format changes, client will have to check version
        # and format flags on "stream" capability, and use
        # uncompressed only if compatible.

        if not stream:
            # if the server explicitly prefers to stream (for fast LANs)
            stream = remote.capable('stream-preferred')

        if stream and not heads:
            # 'stream' means remote revlog format is revlogv1 only
            if remote.capable('stream'):
                return self.stream_in(remote, set(('revlogv1',)))
            # otherwise, 'streamreqs' contains the remote revlog format
            streamreqs = remote.capable('streamreqs')
            if streamreqs:
                streamreqs = set(streamreqs.split(','))
                # if we support it, stream in and adjust our requirements
                if not streamreqs - self.supportedformats:
                    return self.stream_in(remote, streamreqs)
        return self.pull(remote, heads)

    def pushkey(self, namespace, key, old, new):
        self.hook('prepushkey', throw=True, namespace=namespace, key=key,
                  old=old, new=new)
        self.ui.debug('pushing key for "%s:%s"\n' % (namespace, key))
        ret = pushkey.push(self, namespace, key, old, new)
        self.hook('pushkey', namespace=namespace, key=key, old=old, new=new,
                  ret=ret)
        return ret

    def listkeys(self, namespace):
        self.hook('prelistkeys', throw=True, namespace=namespace)
        self.ui.debug('listing keys for "%s"\n' % namespace)
        values = pushkey.list(self, namespace)
        self.hook('listkeys', namespace=namespace, values=values)
        return values

    def debugwireargs(self, one, two, three=None, four=None, five=None):
        '''used to test argument passing over the wire'''
        return "%s %s %s %s %s" % (one, two, three, four, five)

    def savecommitmessage(self, text):
        fp = self.opener('last-message.txt', 'wb')
        try:
            fp.write(text)
        finally:
            fp.close()
        return self.pathto(fp.name[len(self.root) + 1:])

# used to avoid circular references so destructors work
def aftertrans(files):
    renamefiles = [tuple(t) for t in files]
    def a():
        for vfs, src, dest in renamefiles:
            try:
                vfs.rename(src, dest)
            except OSError: # journal file does not yet exist
                pass
    return a

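# Usage sketch (assumption about the caller, written from memory of
# localrepository.transaction; treat the exact signature as illustrative):
#
#     transaction.transaction(ui.warn, sopener, "journal",
#                             aftertrans(renames), createmode)
#
# so journal files are renamed to their undo names only once the
# transaction completes.
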
def undoname(fn):
    base, name = os.path.split(fn)
    assert name.startswith('journal')
    return os.path.join(base, name.replace('journal', 'undo', 1))

def instance(ui, path, create):
    return localrepository(ui, util.urllocalpath(path), create)

def islocal(path):
    return True
@@ -1,812 +1,812 @@
# wireproto.py - generic wire protocol support functions
#
# Copyright 2005-2010 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

import urllib, tempfile, os, sys
from i18n import _
from node import bin, hex
import changegroup as changegroupmod, bundle2
import peer, error, encoding, util, store, exchange


class abstractserverproto(object):
    """abstract class that summarizes the protocol API

    Used as reference and documentation.
    """

    def getargs(self, args):
        """return the value for arguments in <args>

        returns a list of values (same order as <args>)"""
        raise NotImplementedError()

    def getfile(self, fp):
        """write the whole content of a file into a file like object

        The file is in the form::

            (<chunk-size>\n<chunk>)+0\n

        chunk size is the ascii version of the int.
        """
        raise NotImplementedError()

    def redirect(self):
        """may setup interception for stdout and stderr

        See also the `restore` method."""
        raise NotImplementedError()

    # If the `redirect` function does install interception, the `restore`
    # function MUST be defined. If interception is not used, this function
    # MUST NOT be defined.
    #
    # left commented here on purpose
    #
    #def restore(self):
    #    """reinstall previous stdout and stderr and return intercepted stdout
    #    """
    #    raise NotImplementedError()

    def groupchunks(self, cg):
        """return 4096 chunks from a changegroup object

        Some protocols may have compressed the contents."""
        raise NotImplementedError()

# abstract batching support

class future(object):
    '''placeholder for a value to be set later'''
    def set(self, value):
        if util.safehasattr(self, 'value'):
            raise error.RepoError("future is already set")
        self.value = value

class batcher(object):
    '''base class for batches of commands submittable in a single request

    All methods invoked on instances of this class are simply queued and
    return a future for the result. Once you call submit(), all the queued
    calls are performed and the results set in their respective futures.
    '''
    def __init__(self):
        self.calls = []
    def __getattr__(self, name):
        def call(*args, **opts):
            resref = future()
            self.calls.append((name, args, opts, resref,))
            return resref
        return call
    def submit(self):
        pass

class localbatch(batcher):
    '''performs the queued calls directly'''
    def __init__(self, local):
        batcher.__init__(self)
        self.local = local
    def submit(self):
        for name, args, opts, resref in self.calls:
            resref.set(getattr(self.local, name)(*args, **opts))

class remotebatch(batcher):
    '''batches the queued calls; uses as few roundtrips as possible'''
    def __init__(self, remote):
        '''remote must support _submitbatch(encbatch) and
        _submitone(op, encargs)'''
        batcher.__init__(self)
        self.remote = remote
    def submit(self):
        req, rsp = [], []
        for name, args, opts, resref in self.calls:
            mtd = getattr(self.remote, name)
            batchablefn = getattr(mtd, 'batchable', None)
            if batchablefn is not None:
                batchable = batchablefn(mtd.im_self, *args, **opts)
                encargsorres, encresref = batchable.next()
                if encresref:
                    req.append((name, encargsorres,))
                    rsp.append((batchable, encresref, resref,))
                else:
                    resref.set(encargsorres)
            else:
                if req:
                    self._submitreq(req, rsp)
                    req, rsp = [], []
                resref.set(mtd(*args, **opts))
        if req:
            self._submitreq(req, rsp)
    def _submitreq(self, req, rsp):
        encresults = self.remote._submitbatch(req)
        for encres, r in zip(encresults, rsp):
            batchable, encresref, resref = r
            encresref.set(encres)
            resref.set(batchable.next())

131 def batchable(f):
131 def batchable(f):
132 '''annotation for batchable methods
132 '''annotation for batchable methods
133
133
134 Such methods must implement a coroutine as follows:
134 Such methods must implement a coroutine as follows:
135
135
136 @batchable
136 @batchable
137 def sample(self, one, two=None):
137 def sample(self, one, two=None):
138 # Handle locally computable results first:
138 # Handle locally computable results first:
139 if not one:
139 if not one:
140 yield "a local result", None
140 yield "a local result", None
141 # Build list of encoded arguments suitable for your wire protocol:
141 # Build list of encoded arguments suitable for your wire protocol:
142 encargs = [('one', encode(one),), ('two', encode(two),)]
142 encargs = [('one', encode(one),), ('two', encode(two),)]
143 # Create future for injection of encoded result:
143 # Create future for injection of encoded result:
144 encresref = future()
144 encresref = future()
145 # Return encoded arguments and future:
145 # Return encoded arguments and future:
146 yield encargs, encresref
146 yield encargs, encresref
147 # Assuming the future to be filled with the result from the batched
147 # Assuming the future to be filled with the result from the batched
148 # request now. Decode it:
148 # request now. Decode it:
149 yield decode(encresref.value)
149 yield decode(encresref.value)
150
150
151 The decorator returns a function which wraps this coroutine as a plain
151 The decorator returns a function which wraps this coroutine as a plain
152 method, but adds the original method as an attribute called "batchable",
152 method, but adds the original method as an attribute called "batchable",
153 which is used by remotebatch to split the call into separate encoding and
153 which is used by remotebatch to split the call into separate encoding and
154 decoding phases.
154 decoding phases.
155 '''
155 '''
156 def plain(*args, **opts):
156 def plain(*args, **opts):
157 batchable = f(*args, **opts)
157 batchable = f(*args, **opts)
158 encargsorres, encresref = batchable.next()
158 encargsorres, encresref = batchable.next()
159 if not encresref:
159 if not encresref:
160 return encargsorres # a local result in this case
160 return encargsorres # a local result in this case
161 self = args[0]
161 self = args[0]
162 encresref.set(self._submitone(f.func_name, encargsorres))
162 encresref.set(self._submitone(f.func_name, encargsorres))
163 return batchable.next()
163 return batchable.next()
164 setattr(plain, 'batchable', f)
164 setattr(plain, 'batchable', f)
165 return plain
165 return plain
166
166
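# A minimal usage sketch of the batching machinery above, assuming `remote`
# is a wirepeer instance (`somenode` is an illustrative value, not part of
# this module):
#
#     b = remote.batch()            # a remotebatch; queues instead of calling
#     fheads = b.heads()            # returns a future, nothing sent yet
#     fknown = b.known([somenode])  # idem
#     b.submit()                    # one "batch" request over the wire
#     heads = fheads.value          # futures are now populated
#     known = fknown.value
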
# list of nodes encoding / decoding

def decodelist(l, sep=' '):
    if l:
        return map(bin, l.split(sep))
    return []

def encodelist(l, sep=' '):
    return sep.join(map(hex, l))

# batched call argument encoding

def escapearg(plain):
    return (plain
            .replace(':', '::')
            .replace(',', ':,')
            .replace(';', ':;')
            .replace('=', ':='))

def unescapearg(escaped):
    return (escaped
            .replace(':=', '=')
            .replace(':;', ';')
            .replace(':,', ',')
            .replace('::', ':'))

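# A quick round trip through the escaping above: the escape character ':'
# is doubled first, then each 'batch' separator is prefixed with ':', so
# the pair is designed so that unescapearg(escapearg(s)) == s. For example:
#
#     escapearg('a=b;c')     -> 'a:=b:;c'
#     unescapearg('a:=b:;c') -> 'a=b;c'
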
# client side

class wirepeer(peer.peerrepository):

    def batch(self):
        return remotebatch(self)
    def _submitbatch(self, req):
        cmds = []
        for op, argsdict in req:
            args = ','.join('%s=%s' % p for p in argsdict.iteritems())
            cmds.append('%s %s' % (op, args))
        rsp = self._call("batch", cmds=';'.join(cmds))
        return rsp.split(';')
    def _submitone(self, op, args):
        return self._call(op, **args)

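    # With the two queued calls from the batching sketch above, _submitbatch
    # sends a single wire request shaped roughly like (illustrative values):
    #
    #     batch cmds="heads ;known nodes=<hex nodes>"
    #
    # one ';'-separated '<op> <key>=<value>,...' entry per queued call; the
    # server-side 'batch' command below splits this apart again and returns
    # the ';'-joined, escapearg()-escaped results.
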
    @batchable
    def lookup(self, key):
        self.requirecap('lookup', _('look up remote revision'))
        f = future()
        yield {'key': encoding.fromlocal(key)}, f
        d = f.value
        success, data = d[:-1].split(" ", 1)
        if int(success):
            yield bin(data)
        self._abort(error.RepoError(data))

    @batchable
    def heads(self):
        f = future()
        yield {}, f
        d = f.value
        try:
            yield decodelist(d[:-1])
        except ValueError:
            self._abort(error.ResponseError(_("unexpected response:"), d))

    @batchable
    def known(self, nodes):
        f = future()
        yield {'nodes': encodelist(nodes)}, f
        d = f.value
        try:
            yield [bool(int(f)) for f in d]
        except ValueError:
            self._abort(error.ResponseError(_("unexpected response:"), d))

    @batchable
    def branchmap(self):
        f = future()
        yield {}, f
        d = f.value
        try:
            branchmap = {}
            for branchpart in d.splitlines():
                branchname, branchheads = branchpart.split(' ', 1)
                branchname = encoding.tolocal(urllib.unquote(branchname))
                branchheads = decodelist(branchheads)
                branchmap[branchname] = branchheads
            yield branchmap
        except TypeError:
            self._abort(error.ResponseError(_("unexpected response:"), d))

    def branches(self, nodes):
        n = encodelist(nodes)
        d = self._call("branches", nodes=n)
        try:
            br = [tuple(decodelist(b)) for b in d.splitlines()]
            return br
        except ValueError:
            self._abort(error.ResponseError(_("unexpected response:"), d))

    def between(self, pairs):
        batch = 8 # avoid giant requests
        r = []
        for i in xrange(0, len(pairs), batch):
            n = " ".join([encodelist(p, '-') for p in pairs[i:i + batch]])
            d = self._call("between", pairs=n)
            try:
                r.extend(l and decodelist(l) or [] for l in d.splitlines())
            except ValueError:
                self._abort(error.ResponseError(_("unexpected response:"), d))
        return r

    @batchable
    def pushkey(self, namespace, key, old, new):
        if not self.capable('pushkey'):
            yield False, None
        f = future()
        self.ui.debug('preparing pushkey for "%s:%s"\n' % (namespace, key))
        yield {'namespace': encoding.fromlocal(namespace),
               'key': encoding.fromlocal(key),
               'old': encoding.fromlocal(old),
               'new': encoding.fromlocal(new)}, f
        d = f.value
        d, output = d.split('\n', 1)
        try:
            d = bool(int(d))
        except ValueError:
            raise error.ResponseError(
                _('push failed (unexpected response):'), d)
        for l in output.splitlines(True):
            self.ui.status(_('remote: '), l)
        yield d

    @batchable
    def listkeys(self, namespace):
        if not self.capable('pushkey'):
            yield {}, None
        f = future()
        self.ui.debug('preparing listkeys for "%s"\n' % namespace)
        yield {'namespace': encoding.fromlocal(namespace)}, f
        d = f.value
        r = {}
        for l in d.splitlines():
            k, v = l.split('\t')
            r[encoding.tolocal(k)] = encoding.tolocal(v)
        yield r

    def stream_out(self):
        return self._callstream('stream_out')

    def changegroup(self, nodes, kind):
        n = encodelist(nodes)
        f = self._callcompressable("changegroup", roots=n)
        return changegroupmod.unbundle10(f, 'UN')

    def changegroupsubset(self, bases, heads, kind):
        self.requirecap('changegroupsubset', _('look up remote changes'))
        bases = encodelist(bases)
        heads = encodelist(heads)
        f = self._callcompressable("changegroupsubset",
                                   bases=bases, heads=heads)
        return changegroupmod.unbundle10(f, 'UN')

    def getbundle(self, source, heads=None, common=None, bundlecaps=None):
        self.requirecap('getbundle', _('look up remote changes'))
        opts = {}
        if heads is not None:
            opts['heads'] = encodelist(heads)
        if common is not None:
            opts['common'] = encodelist(common)
        if bundlecaps is not None:
            opts['bundlecaps'] = ','.join(bundlecaps)
        f = self._callcompressable("getbundle", **opts)
-        if bundlecaps is not None and 'HG20' in bundlecaps:
+        if bundlecaps is not None and 'HG2X' in bundlecaps:
            return bundle2.unbundle20(self.ui, f)
        else:
            return changegroupmod.unbundle10(f, 'UN')

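    # The 'HG2X' magic checked above is what a caller opts into by listing
    # it in bundlecaps; a hypothetical caller would do something like:
    #
    #     bundle = peer.getbundle('pull', heads=heads,
    #                             bundlecaps=set(['HG2X']))
    #     # -> a bundle2.unbundle20 stream instead of a bundle10 one
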
    def unbundle(self, cg, heads, source):
        '''Send cg (a readable file-like object representing the
        changegroup to push, typically a chunkbuffer object) to the
        remote server as a bundle.

        When pushing a bundle10 stream, return an integer indicating the
        result of the push (see localrepository.addchangegroup()).

        When pushing a bundle20 stream, return a bundle20 stream.'''

        if heads != ['force'] and self.capable('unbundlehash'):
            heads = encodelist(['hashed',
                                util.sha1(''.join(sorted(heads))).digest()])
        else:
            heads = encodelist(heads)

        if util.safehasattr(cg, 'deltaheader'):
            # this is a bundle10, do the old style call sequence
            ret, output = self._callpush("unbundle", cg, heads=heads)
            if ret == "":
                raise error.ResponseError(
                    _('push failed:'), output)
            try:
                ret = int(ret)
            except ValueError:
                raise error.ResponseError(
                    _('push failed (unexpected response):'), ret)

            for l in output.splitlines(True):
                self.ui.status(_('remote: '), l)
        else:
            # bundle2 push. Send a stream, fetch a stream.
            stream = self._calltwowaystream('unbundle', cg, heads=heads)
            ret = bundle2.unbundle20(self.ui, stream)
        return ret

    def debugwireargs(self, one, two, three=None, four=None, five=None):
        # don't pass optional arguments left at their default value
        opts = {}
        if three is not None:
            opts['three'] = three
        if four is not None:
            opts['four'] = four
        return self._call('debugwireargs', one=one, two=two, **opts)

    def _call(self, cmd, **args):
        """execute <cmd> on the server

        The command is expected to return a simple string.

        returns the server reply as a string."""
        raise NotImplementedError()

    def _callstream(self, cmd, **args):
        """execute <cmd> on the server

        The command is expected to return a stream.

        returns the server reply as a file like object."""
        raise NotImplementedError()

    def _callcompressable(self, cmd, **args):
        """execute <cmd> on the server

        The command is expected to return a stream.

        The stream may have been compressed in some implementations. This
        function takes care of the decompression. This is the only difference
        with _callstream.

        returns the server reply as a file like object.
        """
        raise NotImplementedError()

    def _callpush(self, cmd, fp, **args):
        """execute <cmd> on the server

        The command is expected to be related to a push. Push has a special
        return method.

        returns the server reply as a (ret, output) tuple. ret is either
        empty (error) or a stringified int.
        """
        raise NotImplementedError()

    def _calltwowaystream(self, cmd, fp, **args):
        """execute <cmd> on the server

        The command will send a stream to the server and get a stream in reply.
        """
        raise NotImplementedError()

    def _abort(self, exception):
        """clearly abort the wire protocol connection and raise the exception
        """
        raise NotImplementedError()

# server side

# wire protocol commands can either return a string or one of these classes.
class streamres(object):
    """wireproto reply: binary stream

    The call was successful and the result is a stream.
    Iterate on the `self.gen` attribute to retrieve chunks.
    """
    def __init__(self, gen):
        self.gen = gen

class pushres(object):
    """wireproto reply: success with simple integer return

    The call was successful and returned an integer contained in `self.res`.
    """
    def __init__(self, res):
        self.res = res

class pusherr(object):
    """wireproto reply: failure

    The call failed. The `self.res` attribute contains the error message.
    """
    def __init__(self, res):
        self.res = res

class ooberror(object):
    """wireproto reply: failure of a batch of operations

    Something failed during a batch call. The error message is stored in
    `self.message`.
    """
    def __init__(self, message):
        self.message = message

def dispatch(repo, proto, command):
    repo = repo.filtered("served")
    func, spec = commands[command]
    args = proto.getargs(spec)
    return func(repo, proto, *args)

def options(cmd, keys, others):
    opts = {}
    for k in keys:
        if k in others:
            opts[k] = others[k]
            del others[k]
    if others:
        sys.stderr.write("abort: %s got unexpected arguments %s\n"
                         % (cmd, ",".join(others)))
    return opts

# list of commands
commands = {}

def wireprotocommand(name, args=''):
    """decorator for wire protocol command"""
    def register(func):
        commands[name] = (func, args)
        return func
    return register

@wireprotocommand('batch', 'cmds *')
def batch(repo, proto, cmds, others):
    repo = repo.filtered("served")
    res = []
    for pair in cmds.split(';'):
        op, args = pair.split(' ', 1)
        vals = {}
        for a in args.split(','):
            if a:
                n, v = a.split('=')
                vals[n] = unescapearg(v)
        func, spec = commands[op]
        if spec:
            keys = spec.split()
            data = {}
            for k in keys:
                if k == '*':
                    star = {}
                    for key in vals.keys():
                        if key not in keys:
                            star[key] = vals[key]
                    data['*'] = star
                else:
                    data[k] = vals[k]
            result = func(repo, proto, *[data[k] for k in keys])
        else:
            result = func(repo, proto)
        if isinstance(result, ooberror):
            return result
        res.append(escapearg(result))
    return ';'.join(res)

@wireprotocommand('between', 'pairs')
def between(repo, proto, pairs):
    pairs = [decodelist(p, '-') for p in pairs.split(" ")]
    r = []
    for b in repo.between(pairs):
        r.append(encodelist(b) + "\n")
    return "".join(r)

@wireprotocommand('branchmap')
def branchmap(repo, proto):
    branchmap = repo.branchmap()
    heads = []
    for branch, nodes in branchmap.iteritems():
        branchname = urllib.quote(encoding.fromlocal(branch))
        branchnodes = encodelist(nodes)
        heads.append('%s %s' % (branchname, branchnodes))
    return '\n'.join(heads)

@wireprotocommand('branches', 'nodes')
def branches(repo, proto, nodes):
    nodes = decodelist(nodes)
    r = []
    for b in repo.branches(nodes):
        r.append(encodelist(b) + "\n")
    return "".join(r)


wireprotocaps = ['lookup', 'changegroupsubset', 'branchmap', 'pushkey',
                 'known', 'getbundle', 'unbundlehash', 'batch']

def _capabilities(repo, proto):
    """return a list of capabilities for a repo

    This function exists to allow extensions to easily wrap capabilities
    computation.

    - returns a list: easy to alter
    - changes done here will be propagated to both the `capabilities` and
      `hello` commands without any other action needed.
    """
    # copy to prevent modification of the global list
    caps = list(wireprotocaps)
    if _allowstream(repo.ui):
        if repo.ui.configbool('server', 'preferuncompressed', False):
            caps.append('stream-preferred')
        requiredformats = repo.requirements & repo.supportedformats
        # if our local revlogs are just revlogv1, add 'stream' cap
        if not requiredformats - set(('revlogv1',)):
            caps.append('stream')
        # otherwise, add 'streamreqs' detailing our local revlog format
        else:
            caps.append('streamreqs=%s' % ','.join(requiredformats))
    if repo.ui.configbool('server', 'bundle2', False):
        capsblob = bundle2.encodecaps(repo.bundle2caps)
        caps.append('bundle2=' + urllib.quote(capsblob))
    caps.append('unbundle=%s' % ','.join(changegroupmod.bundlepriority))
    caps.append('httpheader=1024')
    return caps

# If you are writing an extension and consider wrapping this function, wrap
# `_capabilities` instead.
@wireprotocommand('capabilities')
def capabilities(repo, proto):
    return ' '.join(_capabilities(repo, proto))

@wireprotocommand('changegroup', 'roots')
def changegroup(repo, proto, roots):
    nodes = decodelist(roots)
    cg = changegroupmod.changegroup(repo, nodes, 'serve')
    return streamres(proto.groupchunks(cg))

@wireprotocommand('changegroupsubset', 'bases heads')
def changegroupsubset(repo, proto, bases, heads):
    bases = decodelist(bases)
    heads = decodelist(heads)
    cg = changegroupmod.changegroupsubset(repo, bases, heads, 'serve')
    return streamres(proto.groupchunks(cg))

@wireprotocommand('debugwireargs', 'one two *')
def debugwireargs(repo, proto, one, two, others):
    # only accept optional args from the known set
    opts = options('debugwireargs', ['three', 'four'], others)
    return repo.debugwireargs(one, two, **opts)

@wireprotocommand('getbundle', '*')
def getbundle(repo, proto, others):
    opts = options('getbundle', ['heads', 'common', 'bundlecaps'], others)
    for k, v in opts.iteritems():
        if k in ('heads', 'common'):
            opts[k] = decodelist(v)
        elif k == 'bundlecaps':
            opts[k] = set(v.split(','))
    cg = exchange.getbundle(repo, 'serve', **opts)
    return streamres(proto.groupchunks(cg))

@wireprotocommand('heads')
def heads(repo, proto):
    h = repo.heads()
    return encodelist(h) + "\n"

@wireprotocommand('hello')
def hello(repo, proto):
    '''the hello command returns a set of lines describing various
    interesting things about the server, in an RFC822-like format.
    Currently the only one defined is "capabilities", which
    consists of a line in the form:

    capabilities: space separated list of tokens
    '''
    return "capabilities: %s\n" % (capabilities(repo, proto))

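# A minimal sketch of parsing a hello reply on the client side (the
# `helloreply` name is illustrative, not part of this module):
#
#     caps = set()
#     for line in helloreply.splitlines():
#         key, value = line.split(':', 1)
#         if key == 'capabilities':
#             caps = set(value.split())
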
@wireprotocommand('listkeys', 'namespace')
def listkeys(repo, proto, namespace):
    d = repo.listkeys(encoding.tolocal(namespace)).items()
    t = '\n'.join(['%s\t%s' % (encoding.fromlocal(k), encoding.fromlocal(v))
                   for k, v in d])
    return t

@wireprotocommand('lookup', 'key')
def lookup(repo, proto, key):
    try:
        k = encoding.tolocal(key)
        c = repo[k]
        r = c.hex()
        success = 1
    except Exception, inst:
        r = str(inst)
        success = 0
    return "%s %s\n" % (success, r)

@wireprotocommand('known', 'nodes *')
def known(repo, proto, nodes, others):
    return ''.join(b and "1" or "0" for b in repo.known(decodelist(nodes)))

@wireprotocommand('pushkey', 'namespace key old new')
def pushkey(repo, proto, namespace, key, old, new):
    # compatibility with pre-1.8 clients which were accidentally
    # sending raw binary nodes rather than utf-8-encoded hex
    if len(new) == 20 and new.encode('string-escape') != new:
        # looks like it could be a binary node
        try:
            new.decode('utf-8')
            new = encoding.tolocal(new) # but cleanly decodes as UTF-8
        except UnicodeDecodeError:
            pass # binary, leave unmodified
    else:
        new = encoding.tolocal(new) # normal path

    if util.safehasattr(proto, 'restore'):

        proto.redirect()

        try:
            r = repo.pushkey(encoding.tolocal(namespace), encoding.tolocal(key),
                             encoding.tolocal(old), new) or False
        except util.Abort:
            r = False

        output = proto.restore()

        return '%s\n%s' % (int(r), output)

    r = repo.pushkey(encoding.tolocal(namespace), encoding.tolocal(key),
                     encoding.tolocal(old), new)
    return '%s\n' % int(r)

def _allowstream(ui):
    return ui.configbool('server', 'uncompressed', True, untrusted=True)

def _walkstreamfiles(repo):
    # this is its own function so extensions can override it
    return repo.store.walk()

@wireprotocommand('stream_out')
def stream(repo, proto):
    '''If the server supports streaming clone, it advertises the "stream"
    capability with a value representing the version and flags of the repo
    it is serving. Client checks to see if it understands the format.

    The format is simple: the server writes out a line with the number
    of files, then the total number of bytes to be transferred (separated
    by a space). Then, for each file, the server first writes the filename
    and file size (separated by the null character), then the file contents.
    '''

    if not _allowstream(repo.ui):
        return '1\n'

    entries = []
    total_bytes = 0
    try:
        # get consistent snapshot of repo, lock during scan
        lock = repo.lock()
        try:
            repo.ui.debug('scanning\n')
            for name, ename, size in _walkstreamfiles(repo):
                if size:
                    entries.append((name, size))
                    total_bytes += size
        finally:
            lock.release()
    except error.LockError:
        return '2\n' # error: 2

    def streamer(repo, entries, total):
        '''stream out all metadata files in repository.'''
        yield '0\n' # success
        repo.ui.debug('%d files, %d bytes to transfer\n' %
                      (len(entries), total_bytes))
        yield '%d %d\n' % (len(entries), total_bytes)

        sopener = repo.sopener
        oldaudit = sopener.mustaudit
        debugflag = repo.ui.debugflag
        sopener.mustaudit = False

        try:
            for name, size in entries:
                if debugflag:
                    repo.ui.debug('sending %s (%d bytes)\n' % (name, size))
                # partially encode name over the wire for backwards compat
                yield '%s\0%d\n' % (store.encodedir(name), size)
                if size <= 65536:
                    fp = sopener(name)
                    try:
                        data = fp.read(size)
                    finally:
                        fp.close()
                    yield data
                else:
                    for chunk in util.filechunkiter(sopener(name), limit=size):
                        yield chunk
        # replace with "finally:" when support for python 2.4 has been dropped
        except Exception:
            sopener.mustaudit = oldaudit
            raise
        sopener.mustaudit = oldaudit

    return streamres(streamer(repo, entries, total_bytes))

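# A sketch of consuming the stream format described in the stream_out
# docstring above, assuming `fp` is the reply stream (illustrative, not
# part of this module):
#
#     status = fp.readline()        # '0\n' on success, '1\n'/'2\n' on error
#     filecount, bytecount = map(int, fp.readline().split(' ', 1))
#     for i in xrange(filecount):
#         name, size = fp.readline()[:-1].split('\0', 1)
#         data = fp.read(int(size)) # raw store file contents
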
@wireprotocommand('unbundle', 'heads')
def unbundle(repo, proto, heads):
    their_heads = decodelist(heads)

    try:
        proto.redirect()

        exchange.check_heads(repo, their_heads, 'preparing changes')

        # write bundle data to temporary file because it can be big
        fd, tempname = tempfile.mkstemp(prefix='hg-unbundle-')
        fp = os.fdopen(fd, 'wb+')
        r = 0
        try:
            proto.getfile(fp)
            fp.seek(0)
            gen = exchange.readbundle(repo.ui, fp, None)
            r = exchange.unbundle(repo, gen, their_heads, 'serve',
                                  proto._client())
            if util.safehasattr(r, 'addpart'):
                # The return looks streamable, we are in the bundle2 case and
                # should return a stream.
                return streamres(r.getchunks())
            return pushres(r)

        finally:
            fp.close()
            os.unlink(tempname)
    except util.Abort, inst:
        # The old code we moved used sys.stderr directly.
        # We did not change it to minimise code change.
        # This needs to be moved to something proper.
        # Feel free to do it.
        sys.stderr.write("abort: %s\n" % inst)
        return pushres(0)
    except exchange.PushRaced, exc:
        return pusherr(str(exc))
@@ -1,881 +1,881 @@

Create an extension to test bundle2 API

  $ cat > bundle2.py << EOF
  > """A small extension to test bundle2 implementation
  >
  > Current bundle2 implementation is far too limited to be used in any core
  > code. We still need to be able to test it while it grows up.
  > """
  >
  > import sys
  > from mercurial import cmdutil
  > from mercurial import util
  > from mercurial import bundle2
  > from mercurial import scmutil
  > from mercurial import discovery
  > from mercurial import changegroup
  > cmdtable = {}
  > command = cmdutil.command(cmdtable)
  >
  > ELEPHANTSSONG = """Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
  > Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
  > Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko."""
  > assert len(ELEPHANTSSONG) == 178 # future tests say 178 bytes; trust it.
  >
  > @bundle2.parthandler('test:song')
  > def songhandler(op, part):
  >     """handle a "test:song" bundle2 part, printing the lyrics on stdout"""
  >     op.ui.write('The choir starts singing:\n')
  >     verses = 0
  >     for line in part.read().split('\n'):
  >         op.ui.write(' %s\n' % line)
  >         verses += 1
  >     op.records.add('song', {'verses': verses})
  >
  > @bundle2.parthandler('test:ping')
  > def pinghandler(op, part):
  >     op.ui.write('received ping request (id %i)\n' % part.id)
  >     if op.reply is not None and 'ping-pong' in op.reply.capabilities:
  >         op.ui.write_err('replying to ping request (id %i)\n' % part.id)
  >         rpart = bundle2.bundlepart('test:pong',
  >                                    [('in-reply-to', str(part.id))])
  >         op.reply.addpart(rpart)
  >
  > @bundle2.parthandler('test:debugreply')
  > def debugreply(op, part):
  >     """print data about the capabilities of the bundle reply"""
  >     if op.reply is None:
  >         op.ui.write('debugreply: no reply\n')
  >     else:
  >         op.ui.write('debugreply: capabilities:\n')
  >         for cap in sorted(op.reply.capabilities):
  >             op.ui.write('debugreply: %r\n' % cap)
  >             for val in op.reply.capabilities[cap]:
  >                 op.ui.write('debugreply: %r\n' % val)
  >
  > @command('bundle2',
  >          [('', 'param', [], 'stream level parameter'),
  >           ('', 'unknown', False, 'include an unknown mandatory part in the bundle'),
  >           ('', 'parts', False, 'include some arbitrary parts to the bundle'),
  >           ('', 'reply', False, 'produce a reply bundle'),
  >           ('r', 'rev', [], 'include those changesets in the bundle'),],
  >          '[OUTPUTFILE]')
  > def cmdbundle2(ui, repo, path=None, **opts):
  >     """write a bundle2 container on standard output"""
  >     bundler = bundle2.bundle20(ui)
  >     for p in opts['param']:
  >         p = p.split('=', 1)
  >         try:
  >             bundler.addparam(*p)
  >         except ValueError, exc:
  >             raise util.Abort('%s' % exc)
  >
  >     if opts['reply']:
  >         capsstring = 'ping-pong\nelephants=babar,celeste\ncity%3D%21=celeste%2Cville'
  >         bundler.addpart(bundle2.bundlepart('replycaps', data=capsstring))
  >
  >     revs = opts['rev']
  >     if 'rev' in opts:
  >         revs = scmutil.revrange(repo, opts['rev'])
  >     if revs:
  >         # very crude version of a changegroup part creation
  >         bundled = repo.revs('%ld::%ld', revs, revs)
  >         headmissing = [c.node() for c in repo.set('heads(%ld)', revs)]
  >         headcommon = [c.node() for c in repo.set('parents(%ld) - %ld', revs, revs)]
  >         outgoing = discovery.outgoing(repo.changelog, headcommon, headmissing)
  >         cg = changegroup.getlocalbundle(repo, 'test:bundle2', outgoing, None)
  >         part = bundle2.bundlepart('changegroup', data=cg.getchunks())
  >         bundler.addpart(part)
  >
  >     if opts['parts']:
  >         part = bundle2.bundlepart('test:empty')
  >         bundler.addpart(part)
  >         # add a second one to make sure we handle multiple parts
  >         part = bundle2.bundlepart('test:empty')
  >         bundler.addpart(part)
  >         part = bundle2.bundlepart('test:song', data=ELEPHANTSSONG)
  >         bundler.addpart(part)
  >         part = bundle2.bundlepart('test:debugreply')
  >         bundler.addpart(part)
  >         part = bundle2.bundlepart('test:math',
  >                                   [('pi', '3.14'), ('e', '2.72')],
  >                                   [('cooking', 'raw')],
  >                                   '42')
  >         bundler.addpart(part)
  >     if opts['unknown']:
  >         part = bundle2.bundlepart('test:UNKNOWN',
  >                                   data='some random content')
  >         bundler.addpart(part)
  >     if opts['parts']:
  >         part = bundle2.bundlepart('test:ping')
  >         bundler.addpart(part)
  >
  >     if path is None:
  >         file = sys.stdout
  >     else:
  >         file = open(path, 'w')
  >
  >     for chunk in bundler.getchunks():
  >         file.write(chunk)
  >
  > @command('unbundle2', [], '')
  > def cmdunbundle2(ui, repo, replypath=None):
  >     """process a bundle2 stream from stdin on the current repo"""
  >     try:
  >         tr = None
  >         lock = repo.lock()
  >         tr = repo.transaction('processbundle')
  >         try:
  >             unbundler = bundle2.unbundle20(ui, sys.stdin)
  >             op = bundle2.processbundle(repo, unbundler, lambda: tr)
  >             tr.close()
  >         except KeyError, exc:
  >             raise util.Abort('missing support for %s' % exc)
  >     finally:
  >         if tr is not None:
  >             tr.release()
  >         lock.release()
  >     remains = sys.stdin.read()
  >     ui.write('%i unread bytes\n' % len(remains))
  >     if op.records['song']:
  >         totalverses = sum(r['verses'] for r in op.records['song'])
  >         ui.write('%i total verses sung\n' % totalverses)
  >     for rec in op.records['changegroup']:
  >         ui.write('addchangegroup return: %i\n' % rec['return'])
  >     if op.reply is not None and replypath is not None:
  >         file = open(replypath, 'w')
  >         for chunk in op.reply.getchunks():
  >             file.write(chunk)
  >
  > @command('statbundle2', [], '')
  > def cmdstatbundle2(ui, repo):
  >     """print statistics about the bundle2 container read from stdin"""
  >     unbundler = bundle2.unbundle20(ui, sys.stdin)
  >     try:
  >         params = unbundler.params
  >     except KeyError, exc:
  >         raise util.Abort('unknown parameters: %s' % exc)
  >     ui.write('options count: %i\n' % len(params))
  >     for key in sorted(params):
  >         ui.write('- %s\n' % key)
  >         value = params[key]
  >         if value is not None:
  >             ui.write(' %s\n' % value)
  >     count = 0
  >     for p in unbundler.iterparts():
  >         count += 1
  >         ui.write(' :%s:\n' % p.type)
  >         ui.write(' mandatory: %i\n' % len(p.mandatoryparams))
  >         ui.write(' advisory: %i\n' % len(p.advisoryparams))
  >         ui.write(' payload: %i bytes\n' % len(p.read()))
  >     ui.write('parts count: %i\n' % count)
  > EOF
  $ cat >> $HGRCPATH << EOF
  > [extensions]
  > bundle2=$TESTTMP/bundle2.py
  > [server]
  > bundle2=True
  > [ui]
  > ssh=python "$TESTDIR/dummyssh"
  > [web]
  > push_ssl = false
  > allow_push = *
  > EOF

The extension requires a repo (currently unused)

  $ hg init main
  $ cd main
  $ touch a
  $ hg add a
  $ hg commit -m 'a'


Empty bundle
=================

- no option
- no parts

Test bundling

  $ hg bundle2
  HG2X\x00\x00\x00\x00 (no-eol) (esc)

Test unbundling

  $ hg bundle2 | hg statbundle2
  options count: 0
  parts count: 0
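
The four bytes dumped above are the whole story for an empty bundle: the
'HG2X' magic, a 16-bit stream-parameters size of zero, and a zero part-header
size that marks the end of the stream. A minimal standalone decoding sketch,
with the layout read off the dumps in this test rather than taken from
Mercurial's own parser:

    import struct

    def describe_empty_bundle(data):
        # 4-byte magic string
        assert data[:4] == b'HG2X', 'not a bundle2/HG2X stream'
        # 16-bit size of the stream-parameters blob (zero here)
        (paramssize,) = struct.unpack('>H', data[4:6])
        params = data[6:6 + paramssize]
        print('stream params (%i bytes): %r' % (paramssize, params))
        # a zero part-header size terminates the stream
        offset = 6 + paramssize
        (headersize,) = struct.unpack('>H', data[offset:offset + 2])
        assert headersize == 0, 'bundle has parts'
        print('end of stream, no parts')

    describe_empty_bundle(b'HG2X\x00\x00\x00\x00')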

Test that old style bundles are detected and refused

  $ hg bundle --all ../bundle.hg
  1 changesets found
  $ hg statbundle2 < ../bundle.hg
  abort: unknown bundle version 10
  [255]
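
The version check only needs the magic: old-style bundles start with 'HG10'
followed by a compression marker, while bundle2 streams start with 'HG2X'.
A sniffing sketch along the lines of the abort above (a standalone
illustration, not the actual Mercurial entry point):

    def sniff_bundle_version(stream):
        magic = stream.read(4)
        if magic[:2] != b'HG':
            raise ValueError('not a Mercurial bundle')
        version = magic[2:4].decode('ascii')
        if version != '2X':
            # an old-style bundle yields '10' here, hence the abort above
            raise ValueError('unknown bundle version %s' % version)
        return version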

Test parameters
=================

- some options
- no parts

advisory parameters, no value
-------------------------------

Simplest possible parameter form

Test generation of a simple option

  $ hg bundle2 --param 'caution'
  HG2X\x00\x07caution\x00\x00 (no-eol) (esc)

Test unbundling

  $ hg bundle2 --param 'caution' | hg statbundle2
  options count: 1
  - caution
  parts count: 0

Test generation of multiple options

  $ hg bundle2 --param 'caution' --param 'meal'
  HG2X\x00\x0ccaution meal\x00\x00 (no-eol) (esc)

Test unbundling

  $ hg bundle2 --param 'caution' --param 'meal' | hg statbundle2
  options count: 2
  - caution
  - meal
  parts count: 0

advisory parameters, with value
-------------------------------

Test generation

  $ hg bundle2 --param 'caution' --param 'meal=vegan' --param 'elephants'
  HG2X\x00\x1ccaution meal=vegan elephants\x00\x00 (no-eol) (esc)

Test unbundling

  $ hg bundle2 --param 'caution' --param 'meal=vegan' --param 'elephants' | hg statbundle2
  options count: 3
  - caution
  - elephants
  - meal
      vegan
  parts count: 0

parameter with special chars in value
---------------------------------------------------

Test generation

  $ hg bundle2 --param 'e|! 7/=babar%#==tutu' --param simple
  HG2X\x00)e%7C%21%207/=babar%25%23%3D%3Dtutu simple\x00\x00 (no-eol) (esc)

Test unbundling

  $ hg bundle2 --param 'e|! 7/=babar%#==tutu' --param simple | hg statbundle2
  options count: 2
  - e|! 7/
      babar%#==tutu
  - simple
  parts count: 0
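
The escaping seen in the generated header is plain urlquoting of each name
and value, joined with '=' and separated by spaces, with the 16-bit blob size
in front. A generation-side sketch (standalone; urllib.parse.quote happens to
produce the same escapes as the output above):

    import struct
    from urllib.parse import quote

    def encode_stream_params(params):
        # params: list of (name, value-or-None) pairs
        chunks = []
        for name, value in params:
            chunk = quote(name)
            if value is not None:
                chunk += '=' + quote(value)
            chunks.append(chunk)
        blob = ' '.join(chunks).encode('ascii')
        return struct.pack('>H', len(blob)) + blob

    # reproduces the bytes after 'HG2X' in the dump above
    print(encode_stream_params([('e|! 7/', 'babar%#==tutu'), ('simple', None)]))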

Test unknown mandatory option
---------------------------------------------------

  $ hg bundle2 --param 'Gravity' | hg statbundle2
  abort: unknown parameters: 'Gravity'
  [255]

Test debug output
---------------------------------------------------

bundling debug

  $ hg bundle2 --debug --param 'e|! 7/=babar%#==tutu' --param simple ../out.hg2
  start emission of HG2X stream
  bundle parameter: e%7C%21%207/=babar%25%23%3D%3Dtutu simple
  start of parts
  end of bundle

file content is ok

  $ cat ../out.hg2
  HG2X\x00)e%7C%21%207/=babar%25%23%3D%3Dtutu simple\x00\x00 (no-eol) (esc)

unbundling debug

  $ hg statbundle2 --debug < ../out.hg2
  start processing of HG2X stream
  reading bundle2 stream parameters
  ignoring unknown parameter 'e|! 7/'
  ignoring unknown parameter 'simple'
  options count: 2
  - e|! 7/
      babar%#==tutu
  - simple
  start extraction of bundle2 parts
  part header size: 0
  end of bundle2 stream
  parts count: 0


Test buggy input
---------------------------------------------------

empty parameter name

  $ hg bundle2 --param '' --quiet
  abort: empty parameter name
  [255]

bad parameter name

  $ hg bundle2 --param 42babar
  abort: non letter first character: '42babar'
  [255]


Test part
=================

  $ hg bundle2 --parts ../parts.hg2 --debug
  start emission of HG2X stream
  bundle parameter:
  start of parts
  bundle part: "test:empty"
  bundle part: "test:empty"
  bundle part: "test:song"
  bundle part: "test:debugreply"
  bundle part: "test:math"
  bundle part: "test:ping"
  end of bundle

  $ cat ../parts.hg2
  HG2X\x00\x00\x00\x11 (esc)
  test:empty\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x11 (esc)
  test:empty\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x10 test:song\x00\x00\x00\x02\x00\x00\x00\x00\x00\xb2Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko (esc)
  Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
  Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.\x00\x00\x00\x00\x00\x16\x0ftest:debugreply\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00+ test:math\x00\x00\x00\x04\x02\x01\x02\x04\x01\x04\x07\x03pi3.14e2.72cookingraw\x00\x00\x00\x0242\x00\x00\x00\x00\x00\x10 test:ping\x00\x00\x00\x05\x00\x00\x00\x00\x00\x00\x00\x00 (no-eol) (esc)


  $ hg statbundle2 < ../parts.hg2
  options count: 0
    :test:empty:
      mandatory: 0
      advisory: 0
      payload: 0 bytes
    :test:empty:
      mandatory: 0
      advisory: 0
      payload: 0 bytes
    :test:song:
      mandatory: 0
      advisory: 0
      payload: 178 bytes
    :test:debugreply:
      mandatory: 0
      advisory: 0
      payload: 0 bytes
    :test:math:
      mandatory: 2
      advisory: 1
      payload: 2 bytes
    :test:ping:
      mandatory: 0
      advisory: 0
      payload: 0 bytes
  parts count: 6

  $ hg statbundle2 --debug < ../parts.hg2
  start processing of HG2X stream
  reading bundle2 stream parameters
  options count: 0
  start extraction of bundle2 parts
  part header size: 17
  part type: "test:empty"
  part id: "0"
  part parameters: 0
    :test:empty:
      mandatory: 0
      advisory: 0
  payload chunk size: 0
      payload: 0 bytes
  part header size: 17
  part type: "test:empty"
  part id: "1"
  part parameters: 0
    :test:empty:
      mandatory: 0
      advisory: 0
  payload chunk size: 0
      payload: 0 bytes
  part header size: 16
  part type: "test:song"
  part id: "2"
  part parameters: 0
    :test:song:
      mandatory: 0
      advisory: 0
  payload chunk size: 178
  payload chunk size: 0
      payload: 178 bytes
  part header size: 22
  part type: "test:debugreply"
  part id: "3"
  part parameters: 0
    :test:debugreply:
      mandatory: 0
      advisory: 0
  payload chunk size: 0
      payload: 0 bytes
  part header size: 43
  part type: "test:math"
  part id: "4"
  part parameters: 3
    :test:math:
      mandatory: 2
      advisory: 1
  payload chunk size: 2
  payload chunk size: 0
      payload: 2 bytes
  part header size: 16
  part type: "test:ping"
  part id: "5"
  part parameters: 0
    :test:ping:
      mandatory: 0
      advisory: 0
  payload chunk size: 0
      payload: 0 bytes
  part header size: 0
  end of bundle2 stream
  parts count: 6
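
The debug trace above exposes the whole part framing: a 16-bit header size
(zero terminates the stream), a header made of a length-prefixed type, a
32-bit part id and one-byte counts of mandatory and advisory parameters
(followed by their one-byte key/value sizes, then the key/value data), and
finally the payload as 32-bit length-prefixed chunks ending with an empty
chunk. A standalone parsing sketch of that layout, inferred from the dumps
rather than taken from Mercurial's own reader:

    import struct
    from io import BytesIO

    def readexact(f, n):
        data = f.read(n)
        assert len(data) == n, 'truncated bundle2 stream'
        return data

    def iterparts(f):
        # yields (type, id, params, payload); call after the stream params
        while True:
            (headersize,) = struct.unpack('>H', readexact(f, 2))
            if headersize == 0:
                return  # end-of-stream marker
            header = BytesIO(readexact(f, headersize))
            typelen = readexact(header, 1)[0]
            parttype = readexact(header, typelen).decode('ascii')
            (partid,) = struct.unpack('>I', readexact(header, 4))
            mancount = readexact(header, 1)[0]
            advcount = readexact(header, 1)[0]
            # one (key size, value size) byte pair per parameter ...
            sizes = [struct.unpack('BB', readexact(header, 2))
                     for _ in range(mancount + advcount)]
            # ... then all keys and values back to back, mandatory first
            params = [(readexact(header, ks), readexact(header, vs))
                      for ks, vs in sizes]
            payload = b''
            while True:
                (chunksize,) = struct.unpack('>I', readexact(f, 4))
                if chunksize == 0:
                    break  # empty chunk closes the payload
                payload += readexact(f, chunksize)
            yield parttype, partid, params, payload

    # usage: skip the magic and stream parameters, then walk the parts
    with open('../parts.hg2', 'rb') as f:
        assert readexact(f, 4) == b'HG2X'
        (psize,) = struct.unpack('>H', readexact(f, 2))
        readexact(f, psize)
        for parttype, partid, params, payload in iterparts(f):
            print(parttype, partid, params, len(payload))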

Test actual unbundling of test part
=======================================

Process the bundle

  $ hg unbundle2 --debug < ../parts.hg2
  start processing of HG2X stream
  reading bundle2 stream parameters
  start extraction of bundle2 parts
  part header size: 17
  part type: "test:empty"
  part id: "0"
  part parameters: 0
  ignoring unknown advisory part 'test:empty'
  payload chunk size: 0
  part header size: 17
  part type: "test:empty"
  part id: "1"
  part parameters: 0
  ignoring unknown advisory part 'test:empty'
  payload chunk size: 0
  part header size: 16
  part type: "test:song"
  part id: "2"
  part parameters: 0
  found a handler for part 'test:song'
  The choir starts singing:
  payload chunk size: 178
  payload chunk size: 0
  Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
  Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
  Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
  part header size: 22
  part type: "test:debugreply"
  part id: "3"
  part parameters: 0
  found a handler for part 'test:debugreply'
  debugreply: no reply
  payload chunk size: 0
  part header size: 43
  part type: "test:math"
  part id: "4"
  part parameters: 3
  ignoring unknown advisory part 'test:math'
  payload chunk size: 2
  payload chunk size: 0
  part header size: 16
  part type: "test:ping"
  part id: "5"
  part parameters: 0
  found a handler for part 'test:ping'
  received ping request (id 5)
  payload chunk size: 0
  part header size: 0
  end of bundle2 stream
  0 unread bytes
  3 total verses sung

Unbundle with an unknown mandatory part
(should abort)

  $ hg bundle2 --parts --unknown ../unknown.hg2

  $ hg unbundle2 < ../unknown.hg2
  The choir starts singing:
  Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
  Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
  Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
  debugreply: no reply
  0 unread bytes
  abort: missing support for 'test:unknown'
  [255]
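
The contrast with the advisory parts silently skipped earlier is the core
dispatch rule: look the part type up in a handler registry; with no handler,
an advisory part is ignored while a mandatory one aborts the whole bundle.
A schematic sketch of that rule (names are illustrative, not Mercurial's
internals; mandatory-ness is modelled here as a flag on the part):

    handlers = {}  # part type -> handler, filled by a registration decorator

    def process_part(op, part):
        handler = handlers.get(part.type)
        if handler is not None:
            handler(op, part)
        elif part.mandatory:
            # processing cannot safely continue past a part we do not
            # understand; surfaces as "missing support for ..." above
            raise KeyError(part.type)
        # unknown advisory parts are simply skipped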

unbundle with a reply

  $ hg bundle2 --parts --reply ../parts-reply.hg2
  $ hg unbundle2 ../reply.hg2 < ../parts-reply.hg2
  0 unread bytes
  3 total verses sung

The reply is a bundle

  $ cat ../reply.hg2
  HG2X\x00\x00\x00\x1b\x06output\x00\x00\x00\x00\x00\x01\x0b\x01in-reply-to3\x00\x00\x00\xd9The choir starts singing: (esc)
  Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
  Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
  Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
  \x00\x00\x00\x00\x00\x1b\x06output\x00\x00\x00\x01\x00\x01\x0b\x01in-reply-to4\x00\x00\x00\xc9debugreply: capabilities: (esc)
  debugreply: 'city=!'
  debugreply: 'celeste,ville'
  debugreply: 'elephants'
  debugreply: 'babar'
  debugreply: 'celeste'
  debugreply: 'ping-pong'
  \x00\x00\x00\x00\x00\x1e test:pong\x00\x00\x00\x02\x01\x00\x0b\x01in-reply-to6\x00\x00\x00\x00\x00\x1b\x06output\x00\x00\x00\x03\x00\x01\x0b\x01in-reply-to6\x00\x00\x00=received ping request (id 6) (esc)
  replying to ping request (id 6)
  \x00\x00\x00\x00\x00\x00 (no-eol) (esc)

The reply is valid

  $ hg statbundle2 < ../reply.hg2
  options count: 0
    :output:
      mandatory: 0
      advisory: 1
      payload: 217 bytes
    :output:
      mandatory: 0
      advisory: 1
      payload: 201 bytes
    :test:pong:
      mandatory: 1
      advisory: 0
      payload: 0 bytes
    :output:
      mandatory: 0
      advisory: 1
      payload: 61 bytes
  parts count: 4

Unbundle the reply to get the output:

  $ hg unbundle2 < ../reply.hg2
  remote: The choir starts singing:
  remote: Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
  remote: Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
  remote: Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
  remote: debugreply: capabilities:
  remote: debugreply: 'city=!'
  remote: debugreply: 'celeste,ville'
  remote: debugreply: 'elephants'
  remote: debugreply: 'babar'
  remote: debugreply: 'celeste'
  remote: debugreply: 'ping-pong'
  remote: received ping request (id 6)
  remote: replying to ping request (id 6)
  0 unread bytes
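
Each part of the reply carries an 'in-reply-to' parameter naming the id of
the part it answers (3, 4 and 6 in the dump above); that is what lets the
local side attribute remote output and the pong to the right requests.
Building one such part by hand, with the same byte layout as in the parsing
sketch earlier (a standalone illustration, not the bundler API):

    import struct

    def encode_part(parttype, partid, mandatory=(), advisory=()):
        params = list(mandatory) + list(advisory)
        header = bytes([len(parttype)]) + parttype.encode('ascii')
        header += struct.pack('>I', partid)           # 32-bit part id
        header += bytes([len(mandatory), len(advisory)])
        for key, value in params:                     # size pairs first ...
            header += bytes([len(key), len(value)])
        for key, value in params:                     # ... then the data
            header += key.encode('ascii') + value.encode('ascii')
        # 16-bit header size, header, then an empty chunk ending the payload
        return struct.pack('>H', len(header)) + header + struct.pack('>I', 0)

    # reproduces the test:pong part answering part id 6 in ../reply.hg2
    print(encode_part('test:pong', 2, mandatory=[('in-reply-to', '6')]))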

Support for changegroup
===================================

  $ hg unbundle $TESTDIR/bundles/rebase.hg
  adding changesets
  adding manifests
  adding file changes
  added 8 changesets with 7 changes to 7 files (+3 heads)
  (run 'hg heads' to see heads, 'hg merge' to merge)

  $ hg log -G
  o  changeset:   8:02de42196ebe
  |  tag:         tip
  |  parent:      6:24b6387c8c8c
  |  user:        Nicolas Dumazet <nicdumz.commits@gmail.com>
  |  date:        Sat Apr 30 15:24:48 2011 +0200
  |  summary:     H
  |
  | o  changeset:   7:eea13746799a
  |/|  parent:      6:24b6387c8c8c
  | |  parent:      5:9520eea781bc
  | |  user:        Nicolas Dumazet <nicdumz.commits@gmail.com>
  | |  date:        Sat Apr 30 15:24:48 2011 +0200
  | |  summary:     G
  | |
  o |  changeset:   6:24b6387c8c8c
  | |  parent:      1:cd010b8cd998
  | |  user:        Nicolas Dumazet <nicdumz.commits@gmail.com>
  | |  date:        Sat Apr 30 15:24:48 2011 +0200
  | |  summary:     F
  | |
  | o  changeset:   5:9520eea781bc
  |/   parent:      1:cd010b8cd998
  |    user:        Nicolas Dumazet <nicdumz.commits@gmail.com>
  |    date:        Sat Apr 30 15:24:48 2011 +0200
  |    summary:     E
  |
  | o  changeset:   4:32af7686d403
  | |  user:        Nicolas Dumazet <nicdumz.commits@gmail.com>
  | |  date:        Sat Apr 30 15:24:48 2011 +0200
  | |  summary:     D
  | |
  | o  changeset:   3:5fddd98957c8
  | |  user:        Nicolas Dumazet <nicdumz.commits@gmail.com>
  | |  date:        Sat Apr 30 15:24:48 2011 +0200
  | |  summary:     C
  | |
  | o  changeset:   2:42ccdea3bb16
  |/   user:        Nicolas Dumazet <nicdumz.commits@gmail.com>
  |    date:        Sat Apr 30 15:24:48 2011 +0200
  |    summary:     B
  |
  o  changeset:   1:cd010b8cd998
     parent:      -1:000000000000
     user:        Nicolas Dumazet <nicdumz.commits@gmail.com>
     date:        Sat Apr 30 15:24:48 2011 +0200
     summary:     A

  @  changeset:   0:3903775176ed
     user:        test
     date:        Thu Jan 01 00:00:00 1970 +0000
     summary:     a


  $ hg bundle2 --debug --rev '8+7+5+4' ../rev.hg2
  4 changesets found
  list of changesets:
  32af7686d403cf45b5d95f2d70cebea587ac806a
  9520eea781bcca16c1e15acc0ba14335a0e8e5ba
  eea13746799a9e0bfd88f29d3c2e9dc9389f524f
  02de42196ebee42ef284b6780a87cdc96e8eaab6
  start emission of HG2X stream
  bundle parameter:
  start of parts
  bundle part: "changegroup"
  bundling: 1/4 changesets (25.00%)
  bundling: 2/4 changesets (50.00%)
  bundling: 3/4 changesets (75.00%)
  bundling: 4/4 changesets (100.00%)
  bundling: 1/4 manifests (25.00%)
  bundling: 2/4 manifests (50.00%)
  bundling: 3/4 manifests (75.00%)
  bundling: 4/4 manifests (100.00%)
  bundling: D 1/3 files (33.33%)
  bundling: E 2/3 files (66.67%)
  bundling: H 3/3 files (100.00%)
  end of bundle

  $ cat ../rev.hg2
  HG2X\x00\x00\x00\x12\x0bchangegroup\x00\x00\x00\x00\x00\x00\x00\x00\x06\x13\x00\x00\x00\xa42\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j_\xdd\xd9\x89W\xc8\xa5JMCm\xfe\x1d\xa9\xd8\x7f!\xa1\xb9{\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)6e1f4c47ecb533ffd0c8e52cdc88afb6cd39e20c (esc)
  \x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x02D (esc)
  \x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01D\x00\x00\x00\xa4\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\xcd\x01\x0b\x8c\xd9\x98\xf3\x98\x1aZ\x81\x15\xf9O\x8d\xa4\xabP`\x89\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)4dece9c826f69490507b98c6383a3009b295837d (esc)
  \x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x02E (esc)
  \x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01E\x00\x00\x00\xa2\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO$\xb68|\x8c\x8c\xae7\x17\x88\x80\xf3\xfa\x95\xde\xd3\xcb\x1c\xf7\x85\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)365b93d57fdf4814e2b5911d6bacff2b12014441 (esc)
  \x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x00\x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01G\x00\x00\x00\xa4\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
  \x87\xcd\xc9n\x8e\xaa\xb6$\xb68|\x8c\x8c\xae7\x17\x88\x80\xf3\xfa\x95\xde\xd3\xcb\x1c\xf7\x85\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
  \x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)8bee48edc7318541fc0013ee41b089276a8c24bf (esc)
  \x00\x00\x00f\x00\x00\x00f\x00\x00\x00\x02H (esc)
  \x00\x00\x00g\x00\x00\x00h\x00\x00\x00\x01H\x00\x00\x00\x00\x00\x00\x00\x8bn\x1fLG\xec\xb53\xff\xd0\xc8\xe5,\xdc\x88\xaf\xb6\xcd9\xe2\x0cf\xa5\xa0\x18\x17\xfd\xf5#\x9c'8\x02\xb5\xb7a\x8d\x05\x1c\x89\xe4\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x81\x00\x00\x00\x81\x00\x00\x00+D\x00c3f1ca2924c16a19b0656a84900e504e5b0aec2d (esc)
  \x00\x00\x00\x8bM\xec\xe9\xc8&\xf6\x94\x90P{\x98\xc68:0 \xb2\x95\x83}\x00}\x8c\x9d\x88\x84\x13%\xf5\xc6\xb0cq\xb3[N\x8a+\x1a\x83\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00+\x00\x00\x00\xac\x00\x00\x00+E\x009c6fd0350a6c0d0c49d4a9c5017cf07043f54e58 (esc)
  \x00\x00\x00\x8b6[\x93\xd5\x7f\xdfH\x14\xe2\xb5\x91\x1dk\xac\xff+\x12\x01DA(\xa5\x84\xc6^\xf1!\xf8\x9e\xb6j\xb7\xd0\xbc\x15=\x80\x99\xe7\xceM\xec\xe9\xc8&\xf6\x94\x90P{\x98\xc68:0 \xb2\x95\x83}\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO\x00\x00\x00V\x00\x00\x00V\x00\x00\x00+F\x0022bfcfd62a21a3287edbd4d656218d0f525ed76a (esc)
  \x00\x00\x00\x97\x8b\xeeH\xed\xc71\x85A\xfc\x00\x13\xeeA\xb0\x89'j\x8c$\xbf(\xa5\x84\xc6^\xf1!\xf8\x9e\xb6j\xb7\xd0\xbc\x15=\x80\x99\xe7\xce\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
  \x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00+\x00\x00\x00V\x00\x00\x00\x00\x00\x00\x00\x81\x00\x00\x00\x81\x00\x00\x00+H\x008500189e74a9e0475e822093bc7db0d631aeb0b4 (esc)
  \x00\x00\x00\x00\x00\x00\x00\x05D\x00\x00\x00b\xc3\xf1\xca)$\xc1j\x19\xb0ej\x84\x90\x0ePN[ (esc)
  \xec-\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02D (esc)
  \x00\x00\x00\x00\x00\x00\x00\x05E\x00\x00\x00b\x9co\xd05 (esc)
  l\r (no-eol) (esc)
  \x0cI\xd4\xa9\xc5\x01|\xf0pC\xf5NX\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02E (esc)
  \x00\x00\x00\x00\x00\x00\x00\x05H\x00\x00\x00b\x85\x00\x18\x9et\xa9\xe0G^\x82 \x93\xbc}\xb0\xd61\xae\xb0\xb4\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
  \x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02H (esc)
  \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 (no-eol) (esc)

  $ hg unbundle2 < ../rev.hg2
  adding changesets
  adding manifests
  adding file changes
  added 0 changesets with 0 changes to 3 files
  0 unread bytes
  addchangegroup return: 1

with reply

  $ hg bundle2 --rev '8+7+5+4' --reply ../rev-rr.hg2
  $ hg unbundle2 ../rev-reply.hg2 < ../rev-rr.hg2
  0 unread bytes
  addchangegroup return: 1

  $ cat ../rev-reply.hg2
  HG2X\x00\x00\x00/\x11reply:changegroup\x00\x00\x00\x00\x00\x02\x0b\x01\x06\x01in-reply-to1return1\x00\x00\x00\x00\x00\x1b\x06output\x00\x00\x00\x01\x00\x01\x0b\x01in-reply-to1\x00\x00\x00dadding changesets (esc)
  adding manifests
  adding file changes
  added 0 changesets with 0 changes to 3 files
  \x00\x00\x00\x00\x00\x00 (no-eol) (esc)

Real world exchange
=====================


clone --pull

  $ cd ..
  $ hg clone main other --pull --rev 9520eea781bc
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 2 changes to 2 files
  updating to branch default
  2 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ hg -R other log -G
  @  changeset:   1:9520eea781bc
  |  tag:         tip
  |  user:        Nicolas Dumazet <nicdumz.commits@gmail.com>
  |  date:        Sat Apr 30 15:24:48 2011 +0200
  |  summary:     E
  |
  o  changeset:   0:cd010b8cd998
     user:        Nicolas Dumazet <nicdumz.commits@gmail.com>
     date:        Sat Apr 30 15:24:48 2011 +0200
     summary:     A


pull

  $ hg -R other pull -r 24b6387c8c8c
  pulling from $TESTTMP/main (glob)
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files (+1 heads)
  (run 'hg heads' to see heads, 'hg merge' to merge)

push

  $ hg -R main push other --rev eea13746799a
  pushing to other
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 0 changes to 0 files (-1 heads)

pull over ssh

  $ hg -R other pull ssh://user@dummy/main -r 02de42196ebe --traceback
  pulling from ssh://user@dummy/main
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files (+1 heads)
  (run 'hg heads' to see heads, 'hg merge' to merge)

pull over http

  $ hg -R main serve -p $HGPORT -d --pid-file=main.pid -E main-error.log
  $ cat main.pid >> $DAEMON_PIDS

  $ hg -R other pull http://localhost:$HGPORT/ -r 42ccdea3bb16
  pulling from http://localhost:$HGPORT/
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files (+1 heads)
  (run 'hg heads .' to see heads, 'hg merge' to merge)
  $ cat main-error.log

push over ssh

  $ hg -R main push ssh://user@dummy/other -r 5fddd98957c8
  pushing to ssh://user@dummy/other
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 1 changes to 1 files

push over http

  $ hg -R other serve -p $HGPORT2 -d --pid-file=other.pid -E other-error.log
  $ cat other.pid >> $DAEMON_PIDS

  $ hg -R main push http://localhost:$HGPORT2/ -r 32af7686d403
  pushing to http://localhost:$HGPORT2/
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 1 changes to 1 files
  $ cat other-error.log

Check final content.

  $ hg -R other log -G
  o  changeset:   7:32af7686d403
  |  tag:         tip
  |  user:        Nicolas Dumazet <nicdumz.commits@gmail.com>
  |  date:        Sat Apr 30 15:24:48 2011 +0200
  |  summary:     D
  |
  o  changeset:   6:5fddd98957c8
  |  user:        Nicolas Dumazet <nicdumz.commits@gmail.com>
  |  date:        Sat Apr 30 15:24:48 2011 +0200
  |  summary:     C
  |
  o  changeset:   5:42ccdea3bb16
  |  parent:      0:cd010b8cd998
  |  user:        Nicolas Dumazet <nicdumz.commits@gmail.com>
  |  date:        Sat Apr 30 15:24:48 2011 +0200
  |  summary:     B
  |
  | o  changeset:   4:02de42196ebe
  | |  parent:      2:24b6387c8c8c
  | |  user:        Nicolas Dumazet <nicdumz.commits@gmail.com>
  | |  date:        Sat Apr 30 15:24:48 2011 +0200
  | |  summary:     H
  | |
  | | o  changeset:   3:eea13746799a
  | |/|  parent:      2:24b6387c8c8c
  | | |  parent:      1:9520eea781bc
  | | |  user:        Nicolas Dumazet <nicdumz.commits@gmail.com>
  | | |  date:        Sat Apr 30 15:24:48 2011 +0200
  | | |  summary:     G
  | | |
  | o |  changeset:   2:24b6387c8c8c
  |/ /   parent:      0:cd010b8cd998
  | |    user:        Nicolas Dumazet <nicdumz.commits@gmail.com>
  | |    date:        Sat Apr 30 15:24:48 2011 +0200
  | |    summary:     F
  | |
  | @  changeset:   1:9520eea781bc
  |/   user:        Nicolas Dumazet <nicdumz.commits@gmail.com>
  |    date:        Sat Apr 30 15:24:48 2011 +0200
  |    summary:     E
  |
  o  changeset:   0:cd010b8cd998
     user:        Nicolas Dumazet <nicdumz.commits@gmail.com>
     date:        Sat Apr 30 15:24:48 2011 +0200
     summary:     A
