bundle2: change header size and make them signed (new format)...
Pierre-Yves David
r23009:90f86ad3 default
@@ -1,953 +1,956 @@
# bundle2.py - generic container format to transmit arbitrary data.
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""Handling of the new bundle2 format

The goal of bundle2 is to act as an atomic packet to transmit a set of
payloads in an application agnostic way. It consists of a sequence of "parts"
that will be handed to and processed by the application layer.


General format architecture
===========================

The format is architected as follows

 - magic string
 - stream level parameters
 - payload parts (any number)
 - end of stream marker.

The binary format
============================

All numbers are big-endian; as of this format version, size fields are signed.

stream level parameters
------------------------

Binary format is as follows

-:params size: (16 bits integer)
+:params size: int32

    The total number of Bytes used by the parameters

:params value: arbitrary number of Bytes

    A blob of `params size` containing the serialized version of all stream
    level parameters.

    The blob contains a space separated list of parameters. Parameters with a
    value are stored in the form `<name>=<value>`. Both name and value are
    urlquoted.

    Empty names are obviously forbidden.

    Names MUST start with a letter. If this first letter is lower case, the
    parameter is advisory and can be safely ignored. However, when the first
    letter is capital, the parameter is mandatory and the bundling process
    MUST stop if it is not able to process it.

Stream parameters use a simple textual format for two main reasons:

- Stream level parameters should remain simple and we want to discourage any
  crazy usage.
- Textual data allows easy human inspection of a bundle2 header in case of
  trouble.

Any application level options MUST go into a bundle2 part instead.
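
For instance, one mandatory and one advisory parameter could be serialized as
follows (a minimal sketch using the stdlib; the parameter names are made up)::

    import urllib

    # 'Obsmarkers' is mandatory (capital first letter), 'evolution' advisory.
    entries = [('Obsmarkers', 'V1'), ('evolution', None)]
    blob = ' '.join(urllib.quote(name) if value is None
                    else '%s=%s' % (urllib.quote(name), urllib.quote(value))
                    for name, value in entries)
    # blob == 'Obsmarkers=V1 evolution'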

Payload part
------------------------

Binary format is as follows

-:header size: (16 bits integer)
+:header size: int32

    The total number of Bytes used by the part headers. When the header is
    empty (size = 0) this is interpreted as the end of stream marker.

:header:

    The header defines how to interpret the part. It contains two pieces of
    data: the part type, and the part parameters.

    The part type is used to route the part to an application level handler
    that can interpret the payload.

    Part parameters are passed to the application level handler. They are
    meant to convey information that will help the application level object to
    interpret the part payload.

    The binary format of the header is as follows

    :typesize: (one byte)

    :parttype: alphanumerical part name

    :partid: A 32-bit integer (unique in the bundle) that can be used to refer
        to this part.

    :parameters:

        Part parameters may have arbitrary content; the binary structure is::

            <mandatory-count><advisory-count><param-sizes><param-data>

        :mandatory-count: 1 byte, number of mandatory parameters

        :advisory-count: 1 byte, number of advisory parameters

        :param-sizes:

            N pairs of bytes, where N is the total number of parameters. Each
            pair contains (<size-of-key>, <size-of-value>) for one parameter.

        :param-data:

            A blob of bytes from which each parameter key and value can be
            retrieved using the list of size pairs stored in the previous
            field.

        Mandatory parameters come first, then the advisory ones.

        Each parameter's key MUST be unique within the part.
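
To make the layout concrete, a part carrying a single mandatory parameter
('cg', '01') and no advisory one would serialize its parameter block as
follows (a sketch, not code from the implementation)::

    import struct

    key, value = 'cg', '01'
    block = (struct.pack('>BB', 1, 0)                    # counts
             + struct.pack('>BB', len(key), len(value))  # one size pair
             + key + value)                              # param data blob
    # block == '\x01\x00\x02\x02cg01'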

:payload:

    payload is a series of `<chunksize><chunkdata>`.

-    `chunksize` is a 32 bits integer, `chunkdata` are plain bytes (as much as
+    `chunksize` is an int32, `chunkdata` are plain bytes (as much as
    `chunksize` says). The payload part is concluded by a zero size chunk.

    The current implementation always produces either zero or one chunk.
    This is an implementation limitation that will ultimately be lifted.

+    `chunksize` can be negative to trigger special case processing. No such
+    processing is in place yet.
+
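The framing can be sketched with plain struct calls (using the new signed
int32 size field)::

    import struct

    def frames(data):
        # on-the-wire framing for a single-chunk payload
        yield struct.pack('>i', len(data))   # chunk size
        yield data                           # chunk data
        yield struct.pack('>i', 0)           # end of payload marker

    ''.join(frames('hello'))   # '\x00\x00\x00\x05hello\x00\x00\x00\x00'
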
Bundle processing
============================

Each part is processed in order using a "part handler". Handlers are
registered for a certain part type.

The matching of a part to its handler is case insensitive. The case of the
part type is used to know if a part is mandatory or advisory. If the part type
contains any uppercase char it is considered mandatory. When no handler is
known for a mandatory part, the process is aborted and an exception is raised.
If the part is advisory and no handler is known, the part is ignored. When the
process is aborted, the full bundle is still read from the stream to keep the
channel usable. But none of the parts read after an abort are processed. In
the future, dropping the stream may become an option for channels we do not
care to preserve.
"""

import util
import struct
import urllib
import string
import obsolete
import pushkey

import changegroup, error
from i18n import _

_pack = struct.pack
_unpack = struct.unpack

-_magicstring = 'HG2X'
+_magicstring = 'HG2Y'

-_fstreamparamsize = '>H'
-_fpartheadersize = '>H'
+_fstreamparamsize = '>i'
+_fpartheadersize = '>i'
_fparttypesize = '>B'
_fpartid = '>I'
-_fpayloadsize = '>I'
+_fpayloadsize = '>i'
_fpartparamcount = '>BB'

preferedchunksize = 4096
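
The switch from unsigned ('>H', '>I') to signed ('>i') size formats is what
allows negative sizes to act as special markers; a quick illustration with
the stdlib::

    import struct

    struct.pack('>i', 4096)   # '\x00\x00\x10\x00', a normal positive size
    struct.pack('>i', -1)     # '\xff\xff\xff\xff', representable only now;
                              # '>I' would raise struct.error for -1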

def _makefpartparamsizes(nbparams):
    """return a struct format to read part parameter sizes

    The number of parameters is variable so we need to build that format
    dynamically.
    """
    return '>'+('BB'*nbparams)
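
For example, two parameters yield the format '>BBBB', one (key, value) size
pair per parameter::

    fmt = _makefpartparamsizes(2)            # '>BBBB'
    struct.unpack(fmt, '\x02\x02\x03\x00')   # (2, 2, 3, 0)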

parthandlermapping = {}

def parthandler(parttype, params=()):
    """decorator that registers a function as a bundle2 part handler

    eg::

        @parthandler('myparttype', ('mandatory', 'param', 'handled'))
        def myparttypehandler(...):
            '''process a part of type "my part".'''
            ...
    """
    def _decorator(func):
        lparttype = parttype.lower() # enforce lower case matching.
        assert lparttype not in parthandlermapping
        parthandlermapping[lparttype] = func
        func.params = frozenset(params)
        return func
    return _decorator

class unbundlerecords(object):
    """keep record of what happens during an unbundle

    New records are added using `records.add('cat', obj)`, where 'cat' is a
    category of record and obj is an arbitrary object.

    `records['cat']` will return all entries of this category 'cat'.

    Iterating on the object itself will yield `('category', obj)` tuples
    for all entries.

    All iterations happen in chronological order.
    """

    def __init__(self):
        self._categories = {}
        self._sequences = []
        self._replies = {}

    def add(self, category, entry, inreplyto=None):
        """add a new record of a given category.

        The entry can then be retrieved in the list returned by
        self['category']."""
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))
        if inreplyto is not None:
            self.getreplies(inreplyto).add(category, entry)

    def getreplies(self, partid):
        """get the subrecords that reply to a specific part"""
        return self._replies.setdefault(partid, unbundlerecords())

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

    def __len__(self):
        return len(self._sequences)

    def __nonzero__(self):
        return bool(self._sequences)
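
A short usage sketch::

    records = unbundlerecords()
    records.add('changegroup', {'return': 1})
    records.add('changegroup', {'return': 0}, inreplyto=3)
    records['changegroup']         # ({'return': 1}, {'return': 0})
    list(records.getreplies(3))    # [('changegroup', {'return': 0})]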

class bundleoperation(object):
    """an object that represents a single bundling process

    Its purpose is to carry unbundle-related objects and states.

    A new object should be created at the beginning of each bundle processing.
    The object is to be returned by the processing function.

    The object has very little content now; it will ultimately contain:
    * an access to the repo the bundle is applied to,
    * a ui object,
    * a way to retrieve a transaction to add changes to the repo,
    * a way to record the result of processing each part,
    * a way to construct a bundle response when applicable.
    """

    def __init__(self, repo, transactiongetter):
        self.repo = repo
        self.ui = repo.ui
        self.records = unbundlerecords()
        self.gettransaction = transactiongetter
        self.reply = None

class TransactionUnavailable(RuntimeError):
    pass

def _notransaction():
    """default method to get a transaction while processing a bundle

    Raise an exception to highlight the fact that no transaction was expected
    to be created"""
    raise TransactionUnavailable()

def processbundle(repo, unbundler, transactiongetter=_notransaction):
    """This function processes a bundle, applying its effects to/from a repo

    It iterates over each part then searches for and uses the proper handling
    code to process the part. Parts are processed in order.

    This is a very early version of this function that will be strongly
    reworked before final usage.

    An unknown mandatory part will abort the process.
    """
    op = bundleoperation(repo, transactiongetter)
    # todo:
    # - replace this with an init function soon.
    # - exception catching
    unbundler.params
    iterparts = unbundler.iterparts()
    part = None
    try:
        for part in iterparts:
            _processpart(op, part)
    except Exception, exc:
        for part in iterparts:
            # consume the bundle content
            part.read()
        # Small hack to let caller code distinguish exceptions from bundle2
        # processing from the ones from bundle1 processing. This is mostly
        # needed to handle different return codes to unbundle according to the
        # type of bundle. We should probably clean up or drop this return code
        # craziness in a future version.
        exc.duringunbundle2 = True
        raise
    return op
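
A hypothetical invocation, assuming `repo` is a repository object and `fp` a
file-like object positioned at the magic string::

    unbundler = unbundle20(repo.ui, fp)
    op = processbundle(repo, unbundler)
    op.records['changegroup']   # entries recorded by the part handlers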

def _processpart(op, part):
    """process a single part from a bundle

    The part is guaranteed to have been fully consumed when the function exits
    (even if an exception is raised)."""
    try:
        parttype = part.type
        # part keys are matched lower case
        key = parttype.lower()
        try:
            handler = parthandlermapping.get(key)
            if handler is None:
                raise error.BundleValueError(parttype=key)
            op.ui.debug('found a handler for part %r\n' % parttype)
            unknownparams = part.mandatorykeys - handler.params
            if unknownparams:
                unknownparams = list(unknownparams)
                unknownparams.sort()
                raise error.BundleValueError(parttype=key,
                                             params=unknownparams)
        except error.BundleValueError, exc:
            if key != parttype: # mandatory parts
                raise
            op.ui.debug('ignoring unsupported advisory part %s\n' % exc)
            return # skip to part processing

        # handler is called outside the above try block so that we don't
        # risk catching KeyErrors from anything other than the
        # parthandlermapping lookup (any KeyError raised by handler()
        # itself represents a defect of a different variety).
        output = None
        if op.reply is not None:
            op.ui.pushbuffer(error=True)
            output = ''
        try:
            handler(op, part)
        finally:
            if output is not None:
                output = op.ui.popbuffer()
        if output:
            outpart = op.reply.newpart('b2x:output', data=output)
            outpart.addparam('in-reply-to', str(part.id), mandatory=False)
    finally:
        # consume the part content to not corrupt the stream.
        part.read()


def decodecaps(blob):
    """decode a bundle2 caps bytes blob into a dictionary

    The blob is a list of capabilities (one per line)
    Capabilities may have values using a line of the form::

        capability=value1,value2,value3

    The values are always a list."""
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if '=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split('=', 1)
            vals = vals.split(',')
        key = urllib.unquote(key)
        vals = [urllib.unquote(v) for v in vals]
        caps[key] = vals
    return caps

def encodecaps(caps):
    """encode a bundle2 caps dictionary into a bytes blob"""
    chunks = []
    for ca in sorted(caps):
        vals = caps[ca]
        ca = urllib.quote(ca)
        vals = [urllib.quote(v) for v in vals]
        if vals:
            ca = "%s=%s" % (ca, ','.join(vals))
        chunks.append(ca)
    return '\n'.join(chunks)
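
The two helpers round-trip; a quick sketch::

    caps = {'HG2Y': [], 'b2x:obsmarkers': ['V1']}
    blob = encodecaps(caps)     # 'HG2Y\nb2x:obsmarkers=V1'
    decodecaps(blob) == caps    # True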

class bundle20(object):
    """represent an outgoing bundle2 container

    Use the `addparam` method to add stream level parameters and `newpart` to
    populate it. Then call `getchunks` to retrieve all the binary chunks of
    data that compose the bundle2 container."""

    def __init__(self, ui, capabilities=()):
        self.ui = ui
        self._params = []
        self._parts = []
        self.capabilities = dict(capabilities)

    @property
    def nbparts(self):
        """total number of parts added to the bundler"""
        return len(self._parts)

    # methods used to define the bundle2 content
    def addparam(self, name, value=None):
        """add a stream level parameter"""
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
        self._params.append((name, value))

    def addpart(self, part):
        """add a new part to the bundle2 container

        Parts contain the actual application payload."""
        assert part.id is None
        part.id = len(self._parts) # very cheap counter
        self._parts.append(part)

    def newpart(self, typeid, *args, **kwargs):
        """create a new part and add it to the container

        The part is directly added to the container. For now, this means
        that any failure to properly initialize the part after calling
        ``newpart`` should result in a failure of the whole bundling process.

        You can still fall back to manually creating and adding a part if you
        need better control."""
        part = bundlepart(typeid, *args, **kwargs)
        self.addpart(part)
        return part

    # methods used to generate the bundle2 stream
    def getchunks(self):
        self.ui.debug('start emission of %s stream\n' % _magicstring)
        yield _magicstring
        param = self._paramchunk()
        self.ui.debug('bundle parameter: %s\n' % param)
        yield _pack(_fstreamparamsize, len(param))
        if param:
            yield param

        self.ui.debug('start of parts\n')
        for part in self._parts:
            self.ui.debug('bundle part: "%s"\n' % part.type)
            for chunk in part.getchunks():
                yield chunk
        self.ui.debug('end of bundle\n')
        yield _pack(_fpartheadersize, 0)

    def _paramchunk(self):
        """return an encoded version of all stream parameters"""
        blocks = []
        for par, value in self._params:
            par = urllib.quote(par)
            if value is not None:
                value = urllib.quote(value)
                par = '%s=%s' % (par, value)
            blocks.append(par)
        return ' '.join(blocks)
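
Putting a bundle together then looks like this (a minimal sketch, assuming
`ui` is a Mercurial ui object)::

    bundler = bundle20(ui)
    bundler.addparam('evolution')                # advisory stream parameter
    bundler.newpart('b2x:output', data='hello')  # one payload part
    stream = ''.join(bundler.getchunks())        # starts with 'HG2Y'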

class unpackermixin(object):
    """A mixin to extract bytes and struct data from a stream"""

    def __init__(self, fp):
        self._fp = fp

    def _unpack(self, format):
        """unpack this struct format from the stream"""
        data = self._readexact(struct.calcsize(format))
        return _unpack(format, data)

    def _readexact(self, size):
        """read exactly <size> bytes from the stream"""
        return changegroup.readexactly(self._fp, size)


class unbundle20(unpackermixin):
    """interpret a bundle2 stream

    This class is fed with a binary stream and yields parts through its
    `iterparts` method."""

    def __init__(self, ui, fp, header=None):
        """If header is specified, we do not read it out of the stream."""
        self.ui = ui
        super(unbundle20, self).__init__(fp)
        if header is None:
            header = self._readexact(4)
        magic, version = header[0:2], header[2:4]
        if magic != 'HG':
            raise util.Abort(_('not a Mercurial bundle'))
-        if version != '2X':
+        if version != '2Y':
            raise util.Abort(_('unknown bundle version %s') % version)
        self.ui.debug('start processing of %s stream\n' % header)

    @util.propertycache
    def params(self):
        """dictionary of stream level parameters"""
        self.ui.debug('reading bundle2 stream parameters\n')
        params = {}
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize:
            for p in self._readexact(paramssize).split(' '):
                p = p.split('=', 1)
                p = [urllib.unquote(i) for i in p]
                if len(p) < 2:
                    p.append(None)
                self._processparam(*p)
                params[p[0]] = p[1]
        return params

    def _processparam(self, name, value):
        """process a parameter, applying its effect if needed

        Parameters starting with a lower case letter are advisory and will be
        ignored when unknown. Those starting with an upper case letter are
        mandatory, and this function will raise a KeyError when they are
        unknown.

        Note: no options are currently supported. Any input will either be
        ignored or fail.
        """
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
        # Some logic will be later added here to try to process the option for
        # a dict of known parameters.
        if name[0].islower():
            self.ui.debug("ignoring unknown parameter %r\n" % name)
        else:
            raise error.BundleValueError(params=(name,))


    def iterparts(self):
        """yield all parts contained in the stream"""
        # make sure params have been loaded
        self.params
        self.ui.debug('start extraction of bundle2 parts\n')
        headerblock = self._readpartheader()
        while headerblock is not None:
            part = unbundlepart(self.ui, headerblock, self._fp)
            yield part
            headerblock = self._readpartheader()
        self.ui.debug('end of bundle2 stream\n')

    def _readpartheader(self):
        """read the part header size and return the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        self.ui.debug('part header size: %i\n' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None


class bundlepart(object):
    """A bundle2 part contains application level payload

    The part `type` is used to route the part to the application level
    handler.

    The part payload is contained in ``part.data``. It could be raw bytes or a
    generator of byte chunks.

    You can add parameters to the part using the ``addparam`` method.
    Parameters can be either mandatory (default) or advisory. Remote side
    should be able to safely ignore the advisory ones.

    Neither data nor parameters can be modified after the generation has begun.
    """

    def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
                 data=''):
        self.id = None
        self.type = parttype
        self._data = data
        self._mandatoryparams = list(mandatoryparams)
        self._advisoryparams = list(advisoryparams)
        # checking for duplicated entries
        self._seenparams = set()
        for pname, __ in self._mandatoryparams + self._advisoryparams:
            if pname in self._seenparams:
                raise RuntimeError('duplicated params: %s' % pname)
            self._seenparams.add(pname)
        # status of the part's generation:
        # - None: not started,
        # - False: currently generated,
        # - True: generation done.
        self._generated = None

    # methods used to define the part content
    def __setdata(self, data):
        if self._generated is not None:
            raise error.ReadOnlyPartError('part is being generated')
        self._data = data
    def __getdata(self):
        return self._data
    data = property(__getdata, __setdata)

    @property
    def mandatoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._mandatoryparams)

    @property
    def advisoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._advisoryparams)

    def addparam(self, name, value='', mandatory=True):
        if self._generated is not None:
            raise error.ReadOnlyPartError('part is being generated')
        if name in self._seenparams:
            raise ValueError('duplicated params: %s' % name)
        self._seenparams.add(name)
        params = self._advisoryparams
        if mandatory:
            params = self._mandatoryparams
        params.append((name, value))

    # methods used to generate the bundle2 stream
    def getchunks(self):
        if self._generated is not None:
            raise RuntimeError('part can only be consumed once')
        self._generated = False
        #### header
        ## parttype
        header = [_pack(_fparttypesize, len(self.type)),
                  self.type, _pack(_fpartid, self.id),
                 ]
        ## parameters
        # count
        manpar = self.mandatoryparams
        advpar = self.advisoryparams
        header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
        # size
        parsizes = []
        for key, value in manpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        for key, value in advpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        paramsizes = _pack(_makefpartparamsizes(len(parsizes) / 2), *parsizes)
        header.append(paramsizes)
        # key, value
        for key, value in manpar:
            header.append(key)
            header.append(value)
        for key, value in advpar:
            header.append(key)
            header.append(value)
        ## finalize header
        headerchunk = ''.join(header)
        yield _pack(_fpartheadersize, len(headerchunk))
        yield headerchunk
        ## payload
        for chunk in self._payloadchunks():
            yield _pack(_fpayloadsize, len(chunk))
            yield chunk
        # end of payload
        yield _pack(_fpayloadsize, 0)
        self._generated = True

    def _payloadchunks(self):
        """yield chunks of the part payload

        Exists to handle the different methods to provide data to a part."""
        # we only support fixed size data now.
        # This will be improved in the future.
        if util.safehasattr(self.data, 'next'):
            buff = util.chunkbuffer(self.data)
            chunk = buff.read(preferedchunksize)
            while chunk:
                yield chunk
                chunk = buff.read(preferedchunksize)
        elif len(self.data):
            yield self.data

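Serializing a standalone part then looks as follows (a sketch; `id` is
normally assigned by ``bundle20.addpart``)::

    part = bundlepart('b2x:output', data='hello')
    part.id = 0                      # normally set by bundle20.addpart
    raw = ''.join(part.getchunks())  # header chunks + framed payload
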
class unbundlepart(unpackermixin):
    """a bundle part read from a bundle"""

    def __init__(self, ui, header, fp):
        super(unbundlepart, self).__init__(fp)
        self.ui = ui
        # unbundle state attr
        self._headerdata = header
        self._headeroffset = 0
        self._initialized = False
        self.consumed = False
        # part data
        self.id = None
        self.type = None
        self.mandatoryparams = None
        self.advisoryparams = None
        self.params = None
        self.mandatorykeys = ()
        self._payloadstream = None
        self._readheader()

    def _fromheader(self, size):
        """return the next <size> bytes from the header"""
        offset = self._headeroffset
        data = self._headerdata[offset:(offset + size)]
        self._headeroffset = offset + size
        return data

    def _unpackheader(self, format):
        """read given format from header

        This automatically computes the size of the format to read."""
        data = self._fromheader(struct.calcsize(format))
        return _unpack(format, data)

    def _initparams(self, mandatoryparams, advisoryparams):
        """internal function to set up all logic-related parameters"""
        # make it read only to prevent people touching it by mistake.
        self.mandatoryparams = tuple(mandatoryparams)
        self.advisoryparams = tuple(advisoryparams)
        # user friendly UI
        self.params = dict(self.mandatoryparams)
        self.params.update(dict(self.advisoryparams))
        self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)

    def _readheader(self):
        """read the header and setup the object"""
        typesize = self._unpackheader(_fparttypesize)[0]
        self.type = self._fromheader(typesize)
        self.ui.debug('part type: "%s"\n' % self.type)
        self.id = self._unpackheader(_fpartid)[0]
        self.ui.debug('part id: "%s"\n' % self.id)
        ## reading parameters
        # param count
        mancount, advcount = self._unpackheader(_fpartparamcount)
        self.ui.debug('part parameters: %i\n' % (mancount + advcount))
        # param size
        fparamsizes = _makefpartparamsizes(mancount + advcount)
        paramsizes = self._unpackheader(fparamsizes)
        # make it a list of couples again
        paramsizes = zip(paramsizes[::2], paramsizes[1::2])
        # split mandatory from advisory
        mansizes = paramsizes[:mancount]
        advsizes = paramsizes[mancount:]
        # retrieve param values
        manparams = []
        for key, value in mansizes:
            manparams.append((self._fromheader(key), self._fromheader(value)))
        advparams = []
        for key, value in advsizes:
            advparams.append((self._fromheader(key), self._fromheader(value)))
        self._initparams(manparams, advparams)
        ## part payload
        def payloadchunks():
            payloadsize = self._unpack(_fpayloadsize)[0]
            self.ui.debug('payload chunk size: %i\n' % payloadsize)
            while payloadsize:
                yield self._readexact(payloadsize)
                payloadsize = self._unpack(_fpayloadsize)[0]
                self.ui.debug('payload chunk size: %i\n' % payloadsize)
        self._payloadstream = util.chunkbuffer(payloadchunks())
        # the header is fully read; mark the part as initialized
        self._initialized = True

    def read(self, size=None):
        """read payload data"""
        if not self._initialized:
            self._readheader()
        if size is None:
            data = self._payloadstream.read()
        else:
            data = self._payloadstream.read(size)
        if size is None or len(data) < size:
            self.consumed = True
        return data

-capabilities = {'HG2X': (),
+capabilities = {'HG2Y': (),
                'b2x:listkeys': (),
                'b2x:pushkey': (),
                'b2x:changegroup': (),
               }

def getrepocaps(repo):
    """return the bundle2 capabilities for a given repo

    Exists to allow extensions (like evolution) to mutate the capabilities.
    """
    caps = capabilities.copy()
    if obsolete.isenabled(repo, obsolete.exchangeopt):
        supportedformat = tuple('V%i' % v for v in obsolete.formats)
        caps['b2x:obsmarkers'] = supportedformat
    return caps

def bundle2caps(remote):
    """return the bundle2 capabilities of a peer as a dict"""
    raw = remote.capable('bundle2-exp')
    if not raw and raw != '':
        return {}
    capsblob = urllib.unquote(remote.capable('bundle2-exp'))
    return decodecaps(capsblob)

def obsmarkersversion(caps):
    """extract the list of supported obsmarkers versions from a bundle2caps dict
    """
    obscaps = caps.get('b2x:obsmarkers', ())
    return [int(c[1:]) for c in obscaps if c.startswith('V')]
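
For instance::

    obsmarkersversion({'b2x:obsmarkers': ('V1', 'V2')})   # [1, 2]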

@parthandler('b2x:changegroup')
def handlechangegroup(op, inpart):
    """apply a changegroup part on the repo

    This is a very early implementation that will be massively reworked
    before being inflicted on any end user.
    """
    # Make sure we trigger a transaction creation
    #
    # The addchangegroup function will get a transaction object by itself, but
    # we need to make sure we trigger the creation of a transaction object used
    # for the whole processing scope.
    op.gettransaction()
    cg = changegroup.cg1unpacker(inpart, 'UN')
    # the source and url passed here are overwritten by the one contained in
    # the transaction.hookargs argument. So 'bundle2' is a placeholder
    ret = changegroup.addchangegroup(op.repo, cg, 'bundle2', 'bundle2')
    op.records.add('changegroup', {'return': ret})
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart('b2x:reply:changegroup')
        part.addparam('in-reply-to', str(inpart.id), mandatory=False)
        part.addparam('return', '%i' % ret, mandatory=False)
    assert not inpart.read()

@parthandler('b2x:reply:changegroup', ('return', 'in-reply-to'))
def handlereplychangegroup(op, inpart):
    ret = int(inpart.params['return'])
    replyto = int(inpart.params['in-reply-to'])
    op.records.add('changegroup', {'return': ret}, replyto)

@parthandler('b2x:check:heads')
def handlecheckheads(op, inpart):
    """check that the heads of the repo did not change

    This is used to detect a push race when using unbundle.
    This replaces the "heads" argument of unbundle."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    if heads != op.repo.heads():
        raise error.PushRaced('repository changed while pushing - '
                              'please try again')
862
865
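On the sending side the payload is simply the remote heads streamed as raw 20-byte nodes; this mirrors the call made by _pushb2ctx() in exchange.py further down in this commit:

# hedged sketch of the emitting side of b2x:check:heads
bundler.newpart('B2X:CHECK:HEADS', data=iter(pushop.remoteheads))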
863 @parthandler('b2x:output')
866 @parthandler('b2x:output')
864 def handleoutput(op, inpart):
867 def handleoutput(op, inpart):
865 """forward output captured on the server to the client"""
868 """forward output captured on the server to the client"""
866 for line in inpart.read().splitlines():
869 for line in inpart.read().splitlines():
867 op.ui.write(('remote: %s\n' % line))
870 op.ui.write(('remote: %s\n' % line))
868
871
869 @parthandler('b2x:replycaps')
872 @parthandler('b2x:replycaps')
870 def handlereplycaps(op, inpart):
873 def handlereplycaps(op, inpart):
871 """Notify that a reply bundle should be created
874 """Notify that a reply bundle should be created
872
875
873 The payload contains the capabilities information for the reply"""
876 The payload contains the capabilities information for the reply"""
874 caps = decodecaps(inpart.read())
877 caps = decodecaps(inpart.read())
875 if op.reply is None:
878 if op.reply is None:
876 op.reply = bundle20(op.ui, caps)
879 op.reply = bundle20(op.ui, caps)
877
880
878 @parthandler('b2x:error:abort', ('message', 'hint'))
881 @parthandler('b2x:error:abort', ('message', 'hint'))
879 def handleerrorabort(op, inpart):
882 def handleerrorabort(op, inpart):
880 """Used to transmit abort error over the wire"""
883 """Used to transmit abort error over the wire"""
881 raise util.Abort(inpart.params['message'], hint=inpart.params.get('hint'))
884 raise util.Abort(inpart.params['message'], hint=inpart.params.get('hint'))
882
885
883 @parthandler('b2x:error:unsupportedcontent', ('parttype', 'params'))
886 @parthandler('b2x:error:unsupportedcontent', ('parttype', 'params'))
884 def handleerrorunsupportedcontent(op, inpart):
887 def handleerrorunsupportedcontent(op, inpart):
885 """Used to transmit unknown content error over the wire"""
888 """Used to transmit unknown content error over the wire"""
886 kwargs = {}
889 kwargs = {}
887 parttype = inpart.params.get('parttype')
890 parttype = inpart.params.get('parttype')
888 if parttype is not None:
891 if parttype is not None:
889 kwargs['parttype'] = parttype
892 kwargs['parttype'] = parttype
890 params = inpart.params.get('params')
893 params = inpart.params.get('params')
891 if params is not None:
894 if params is not None:
892 kwargs['params'] = params.split('\0')
895 kwargs['params'] = params.split('\0')
893
896
894 raise error.BundleValueError(**kwargs)
897 raise error.BundleValueError(**kwargs)
895
898
896 @parthandler('b2x:error:pushraced', ('message',))
899 @parthandler('b2x:error:pushraced', ('message',))
897 def handleerrorpushraced(op, inpart):
900 def handleerrorpushraced(op, inpart):
898 """Used to transmit push race error over the wire"""
901 """Used to transmit push race error over the wire"""
899 raise error.ResponseError(_('push failed:'), inpart.params['message'])
902 raise error.ResponseError(_('push failed:'), inpart.params['message'])
900
903
901 @parthandler('b2x:listkeys', ('namespace',))
904 @parthandler('b2x:listkeys', ('namespace',))
902 def handlelistkeys(op, inpart):
905 def handlelistkeys(op, inpart):
903 """retrieve pushkey namespace content stored in a bundle2"""
906 """retrieve pushkey namespace content stored in a bundle2"""
904 namespace = inpart.params['namespace']
907 namespace = inpart.params['namespace']
905 r = pushkey.decodekeys(inpart.read())
908 r = pushkey.decodekeys(inpart.read())
906 op.records.add('listkeys', (namespace, r))
909 op.records.add('listkeys', (namespace, r))
907
910
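A hedged sketch of reading those records back after processing; `op` is assumed to be the bundleoperation returned by processbundle():

# each record is the (namespace, decoded keys) tuple added above
for namespace, keys in op.records['listkeys']:
    op.ui.debug('%i keys in namespace %s\n' % (len(keys), namespace))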
908 @parthandler('b2x:pushkey', ('namespace', 'key', 'old', 'new'))
911 @parthandler('b2x:pushkey', ('namespace', 'key', 'old', 'new'))
909 def handlepushkey(op, inpart):
912 def handlepushkey(op, inpart):
910 """process a pushkey request"""
913 """process a pushkey request"""
911 dec = pushkey.decode
914 dec = pushkey.decode
912 namespace = dec(inpart.params['namespace'])
915 namespace = dec(inpart.params['namespace'])
913 key = dec(inpart.params['key'])
916 key = dec(inpart.params['key'])
914 old = dec(inpart.params['old'])
917 old = dec(inpart.params['old'])
915 new = dec(inpart.params['new'])
918 new = dec(inpart.params['new'])
916 ret = op.repo.pushkey(namespace, key, old, new)
919 ret = op.repo.pushkey(namespace, key, old, new)
917 record = {'namespace': namespace,
920 record = {'namespace': namespace,
918 'key': key,
921 'key': key,
919 'old': old,
922 'old': old,
920 'new': new}
923 'new': new}
921 op.records.add('pushkey', record)
924 op.records.add('pushkey', record)
922 if op.reply is not None:
925 if op.reply is not None:
923 rpart = op.reply.newpart('b2x:reply:pushkey')
926 rpart = op.reply.newpart('b2x:reply:pushkey')
924 rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
927 rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
925 rpart.addparam('return', '%i' % ret, mandatory=False)
928 rpart.addparam('return', '%i' % ret, mandatory=False)
926
929
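All four parameters travel through pushkey.encode/decode, used by this handler and its reply below; a hedged round-trip sketch (plain ASCII values pass through unchanged):

from mercurial import pushkey
wire = pushkey.encode('bookmarks')   # local encoding -> wire encoding
assert pushkey.decode(wire) == 'bookmarks'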
927 @parthandler('b2x:reply:pushkey', ('return', 'in-reply-to'))
930 @parthandler('b2x:reply:pushkey', ('return', 'in-reply-to'))
928 def handlepushkeyreply(op, inpart):
931 def handlepushkeyreply(op, inpart):
929 """retrieve the result of a pushkey request"""
932 """retrieve the result of a pushkey request"""
930 ret = int(inpart.params['return'])
933 ret = int(inpart.params['return'])
931 partid = int(inpart.params['in-reply-to'])
934 partid = int(inpart.params['in-reply-to'])
932 op.records.add('pushkey', {'return': ret}, partid)
935 op.records.add('pushkey', {'return': ret}, partid)
933
936
934 @parthandler('b2x:obsmarkers')
937 @parthandler('b2x:obsmarkers')
935 def handleobsmarker(op, inpart):
938 def handleobsmarker(op, inpart):
936 """add a stream of obsmarkers to the repo"""
939 """add a stream of obsmarkers to the repo"""
937 tr = op.gettransaction()
940 tr = op.gettransaction()
938 new = op.repo.obsstore.mergemarkers(tr, inpart.read())
941 new = op.repo.obsstore.mergemarkers(tr, inpart.read())
939 if new:
942 if new:
940 op.repo.ui.status(_('%i new obsolescence markers\n') % new)
943 op.repo.ui.status(_('%i new obsolescence markers\n') % new)
941 op.records.add('obsmarkers', {'new': new})
944 op.records.add('obsmarkers', {'new': new})
942 if op.reply is not None:
945 if op.reply is not None:
943 rpart = op.reply.newpart('b2x:reply:obsmarkers')
946 rpart = op.reply.newpart('b2x:reply:obsmarkers')
944 rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
947 rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
945 rpart.addparam('new', '%i' % new, mandatory=False)
948 rpart.addparam('new', '%i' % new, mandatory=False)
946
949
947
950
948 @parthandler('b2x:reply:obsmarkers', ('new', 'in-reply-to'))
951 @parthandler('b2x:reply:obsmarkers', ('new', 'in-reply-to'))
949 def handleobsmarkerreply(op, inpart):
952 def handleobsmarkerreply(op, inpart):
950 """retrieve the result of an obsmarkers push"""
953 """retrieve the result of an obsmarkers push"""
951 ret = int(inpart.params['new'])
954 ret = int(inpart.params['new'])
952 partid = int(inpart.params['in-reply-to'])
955 partid = int(inpart.params['in-reply-to'])
953 op.records.add('obsmarkers', {'new': ret}, partid)
956 op.records.add('obsmarkers', {'new': ret}, partid)
@@ -1,1266 +1,1266 b''
1 # exchange.py - utility to exchange data between repos.
1 # exchange.py - utility to exchange data between repos.
2 #
2 #
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from i18n import _
8 from i18n import _
9 from node import hex, nullid
9 from node import hex, nullid
10 import errno, urllib
10 import errno, urllib
11 import util, scmutil, changegroup, base85, error
11 import util, scmutil, changegroup, base85, error
12 import discovery, phases, obsolete, bookmarks as bookmod, bundle2, pushkey
12 import discovery, phases, obsolete, bookmarks as bookmod, bundle2, pushkey
13
13
14 def readbundle(ui, fh, fname, vfs=None):
14 def readbundle(ui, fh, fname, vfs=None):
15 header = changegroup.readexactly(fh, 4)
15 header = changegroup.readexactly(fh, 4)
16
16
17 alg = None
17 alg = None
18 if not fname:
18 if not fname:
19 fname = "stream"
19 fname = "stream"
20 if not header.startswith('HG') and header.startswith('\0'):
20 if not header.startswith('HG') and header.startswith('\0'):
21 fh = changegroup.headerlessfixup(fh, header)
21 fh = changegroup.headerlessfixup(fh, header)
22 header = "HG10"
22 header = "HG10"
23 alg = 'UN'
23 alg = 'UN'
24 elif vfs:
24 elif vfs:
25 fname = vfs.join(fname)
25 fname = vfs.join(fname)
26
26
27 magic, version = header[0:2], header[2:4]
27 magic, version = header[0:2], header[2:4]
28
28
29 if magic != 'HG':
29 if magic != 'HG':
30 raise util.Abort(_('%s: not a Mercurial bundle') % fname)
30 raise util.Abort(_('%s: not a Mercurial bundle') % fname)
31 if version == '10':
31 if version == '10':
32 if alg is None:
32 if alg is None:
33 alg = changegroup.readexactly(fh, 2)
33 alg = changegroup.readexactly(fh, 2)
34 return changegroup.cg1unpacker(fh, alg)
34 return changegroup.cg1unpacker(fh, alg)
35 elif version == '2X':
35 elif version == '2Y':
36 return bundle2.unbundle20(ui, fh, header=magic + version)
36 return bundle2.unbundle20(ui, fh, header=magic + version)
37 else:
37 else:
38 raise util.Abort(_('%s: unknown bundle version %s') % (fname, version))
38 raise util.Abort(_('%s: unknown bundle version %s') % (fname, version))
39
39
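A hedged usage sketch; `ui` and the bundle file name are assumptions. readbundle() sniffs the 4-byte header and returns a cg1unpacker for 'HG10' bundles or an unbundle20 for bundle2 ('HG2Y' after this change):

fh = open('changes.hg', 'rb')        # hypothetical bundle file
gen = readbundle(ui, fh, 'changes.hg')
# gen is a changegroup.cg1unpacker or a bundle2.unbundle20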
40 def buildobsmarkerspart(bundler, markers):
40 def buildobsmarkerspart(bundler, markers):
41 """add an obsmarker part to the bundler with <markers>
41 """add an obsmarker part to the bundler with <markers>
42
42
43 No part is created if markers is empty.
43 No part is created if markers is empty.
44 Raises ValueError if the bundler doesn't support any known obsmarker format.
44 Raises ValueError if the bundler doesn't support any known obsmarker format.
45 """
45 """
46 if markers:
46 if markers:
47 remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
47 remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
48 version = obsolete.commonversion(remoteversions)
48 version = obsolete.commonversion(remoteversions)
49 if version is None:
49 if version is None:
50 raise ValueError('bundler does not support any common obsmarker format')
50 raise ValueError('bundler does not support any common obsmarker format')
51 stream = obsolete.encodemarkers(markers, True, version=version)
51 stream = obsolete.encodemarkers(markers, True, version=version)
52 return bundler.newpart('B2X:OBSMARKERS', data=stream)
52 return bundler.newpart('B2X:OBSMARKERS', data=stream)
53 return None
53 return None
54
54
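A hedged usage sketch matching how _pushb2obsmarkers() further down calls it; the bundler and marker list come from the push operation:

# returns None when there is nothing to send; raises ValueError when
# the remote advertised no obsmarker version we can encode
part = buildobsmarkerspart(bundler, list(pushop.outobsmarkers))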
55 class pushoperation(object):
55 class pushoperation(object):
56 """A object that represent a single push operation
56 """A object that represent a single push operation
57
57
58 Its purpose is to carry push-related state and very common operations.
58 Its purpose is to carry push-related state and very common operations.
59
59
60 A new one should be created at the beginning of each push and discarded
60 A new one should be created at the beginning of each push and discarded
61 afterward.
61 afterward.
62 """
62 """
63
63
64 def __init__(self, repo, remote, force=False, revs=None, newbranch=False,
64 def __init__(self, repo, remote, force=False, revs=None, newbranch=False,
65 bookmarks=()):
65 bookmarks=()):
66 # repo we push from
66 # repo we push from
67 self.repo = repo
67 self.repo = repo
68 self.ui = repo.ui
68 self.ui = repo.ui
69 # repo we push to
69 # repo we push to
70 self.remote = remote
70 self.remote = remote
71 # force option provided
71 # force option provided
72 self.force = force
72 self.force = force
73 # revs to be pushed (None is "all")
73 # revs to be pushed (None is "all")
74 self.revs = revs
74 self.revs = revs
75 # bookmarks explicitly pushed
75 # bookmarks explicitly pushed
76 self.bookmarks = bookmarks
76 self.bookmarks = bookmarks
77 # allow push of new branch
77 # allow push of new branch
78 self.newbranch = newbranch
78 self.newbranch = newbranch
79 # did a local lock get acquired?
79 # did a local lock get acquired?
80 self.locallocked = None
80 self.locallocked = None
81 # step already performed
81 # step already performed
82 # (used to check what steps have been already performed through bundle2)
82 # (used to check what steps have been already performed through bundle2)
83 self.stepsdone = set()
83 self.stepsdone = set()
84 # Integer version of the changegroup push result
84 # Integer version of the changegroup push result
85 # - None means nothing to push
85 # - None means nothing to push
86 # - 0 means HTTP error
86 # - 0 means HTTP error
87 # - 1 means we pushed and remote head count is unchanged *or*
87 # - 1 means we pushed and remote head count is unchanged *or*
88 # we have outgoing changesets but refused to push
88 # we have outgoing changesets but refused to push
89 # - other values as described by addchangegroup()
89 # - other values as described by addchangegroup()
90 self.cgresult = None
90 self.cgresult = None
91 # Boolean value for the bookmark push
91 # Boolean value for the bookmark push
92 self.bkresult = None
92 self.bkresult = None
93 # discovery.outgoing object (contains common and outgoing data)
93 # discovery.outgoing object (contains common and outgoing data)
94 self.outgoing = None
94 self.outgoing = None
95 # all remote heads before the push
95 # all remote heads before the push
96 self.remoteheads = None
96 self.remoteheads = None
97 # testable as a boolean indicating if any nodes are missing locally.
97 # testable as a boolean indicating if any nodes are missing locally.
98 self.incoming = None
98 self.incoming = None
99 # phase changes that must be pushed alongside the changesets
99 # phase changes that must be pushed alongside the changesets
100 self.outdatedphases = None
100 self.outdatedphases = None
101 # phase changes that must be pushed if the changeset push fails
101 # phase changes that must be pushed if the changeset push fails
102 self.fallbackoutdatedphases = None
102 self.fallbackoutdatedphases = None
103 # outgoing obsmarkers
103 # outgoing obsmarkers
104 self.outobsmarkers = set()
104 self.outobsmarkers = set()
105 # outgoing bookmarks
105 # outgoing bookmarks
106 self.outbookmarks = []
106 self.outbookmarks = []
107
107
108 @util.propertycache
108 @util.propertycache
109 def futureheads(self):
109 def futureheads(self):
110 """future remote heads if the changeset push succeeds"""
110 """future remote heads if the changeset push succeeds"""
111 return self.outgoing.missingheads
111 return self.outgoing.missingheads
112
112
113 @util.propertycache
113 @util.propertycache
114 def fallbackheads(self):
114 def fallbackheads(self):
115 """future remote heads if the changeset push fails"""
115 """future remote heads if the changeset push fails"""
116 if self.revs is None:
116 if self.revs is None:
117 # no targets to push; all common heads are relevant
117 # no targets to push; all common heads are relevant
118 return self.outgoing.commonheads
118 return self.outgoing.commonheads
119 unfi = self.repo.unfiltered()
119 unfi = self.repo.unfiltered()
120 # I want cheads = heads(::missingheads and ::commonheads)
120 # I want cheads = heads(::missingheads and ::commonheads)
121 # (missingheads is revs with secret changesets filtered out)
121 # (missingheads is revs with secret changesets filtered out)
122 #
122 #
123 # This can be expressed as:
123 # This can be expressed as:
124 # cheads = ( (missingheads and ::commonheads)
124 # cheads = ( (missingheads and ::commonheads)
125 # + (commonheads and ::missingheads))
125 # + (commonheads and ::missingheads))
126 # )
126 # )
127 #
127 #
128 # while trying to push we already computed the following:
128 # while trying to push we already computed the following:
129 # common = (::commonheads)
129 # common = (::commonheads)
130 # missing = ((commonheads::missingheads) - commonheads)
130 # missing = ((commonheads::missingheads) - commonheads)
131 #
131 #
132 # We can pick:
132 # We can pick:
133 # * missingheads part of common (::commonheads)
133 # * missingheads part of common (::commonheads)
134 common = set(self.outgoing.common)
134 common = set(self.outgoing.common)
135 nm = self.repo.changelog.nodemap
135 nm = self.repo.changelog.nodemap
136 cheads = [node for node in self.revs if nm[node] in common]
136 cheads = [node for node in self.revs if nm[node] in common]
137 # and
137 # and
138 # * commonheads parents on missing
138 # * commonheads parents on missing
139 revset = unfi.set('%ln and parents(roots(%ln))',
139 revset = unfi.set('%ln and parents(roots(%ln))',
140 self.outgoing.commonheads,
140 self.outgoing.commonheads,
141 self.outgoing.missing)
141 self.outgoing.missing)
142 cheads.extend(c.node() for c in revset)
142 cheads.extend(c.node() for c in revset)
143 return cheads
143 return cheads
144
144
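The two-pass computation above approximates a single revset; a hedged, illustration-only equivalent of the formula stated in the comment (slower than what the property actually runs):

cheads = [c.node() for c in unfi.set(
    'heads((::%ln) and (::%ln))',
    self.outgoing.missingheads, self.outgoing.commonheads)]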
145 @property
145 @property
146 def commonheads(self):
146 def commonheads(self):
147 """set of all common heads after changeset bundle push"""
147 """set of all common heads after changeset bundle push"""
148 if self.cgresult:
148 if self.cgresult:
149 return self.futureheads
149 return self.futureheads
150 else:
150 else:
151 return self.fallbackheads
151 return self.fallbackheads
152
152
153 # mapping of messages used when pushing bookmarks
153 # mapping of messages used when pushing bookmarks
154 bookmsgmap = {'update': (_("updating bookmark %s\n"),
154 bookmsgmap = {'update': (_("updating bookmark %s\n"),
155 _('updating bookmark %s failed!\n')),
155 _('updating bookmark %s failed!\n')),
156 'export': (_("exporting bookmark %s\n"),
156 'export': (_("exporting bookmark %s\n"),
157 _('exporting bookmark %s failed!\n')),
157 _('exporting bookmark %s failed!\n')),
158 'delete': (_("deleting remote bookmark %s\n"),
158 'delete': (_("deleting remote bookmark %s\n"),
159 _('deleting remote bookmark %s failed!\n')),
159 _('deleting remote bookmark %s failed!\n')),
160 }
160 }
161
161
162
162
163 def push(repo, remote, force=False, revs=None, newbranch=False, bookmarks=()):
163 def push(repo, remote, force=False, revs=None, newbranch=False, bookmarks=()):
164 '''Push outgoing changesets (limited by revs) from a local
164 '''Push outgoing changesets (limited by revs) from a local
165 repository to remote. Return an integer:
165 repository to remote. Return an integer:
166 - None means nothing to push
166 - None means nothing to push
167 - 0 means HTTP error
167 - 0 means HTTP error
168 - 1 means we pushed and remote head count is unchanged *or*
168 - 1 means we pushed and remote head count is unchanged *or*
169 we have outgoing changesets but refused to push
169 we have outgoing changesets but refused to push
170 - other values as described by addchangegroup()
170 - other values as described by addchangegroup()
171 '''
171 '''
172 pushop = pushoperation(repo, remote, force, revs, newbranch, bookmarks)
172 pushop = pushoperation(repo, remote, force, revs, newbranch, bookmarks)
173 if pushop.remote.local():
173 if pushop.remote.local():
174 missing = (set(pushop.repo.requirements)
174 missing = (set(pushop.repo.requirements)
175 - pushop.remote.local().supported)
175 - pushop.remote.local().supported)
176 if missing:
176 if missing:
177 msg = _("required features are not"
177 msg = _("required features are not"
178 " supported in the destination:"
178 " supported in the destination:"
179 " %s") % (', '.join(sorted(missing)))
179 " %s") % (', '.join(sorted(missing)))
180 raise util.Abort(msg)
180 raise util.Abort(msg)
181
181
182 # there are two ways to push to remote repo:
182 # there are two ways to push to remote repo:
183 #
183 #
184 # addchangegroup assumes local user can lock remote
184 # addchangegroup assumes local user can lock remote
185 # repo (local filesystem, old ssh servers).
185 # repo (local filesystem, old ssh servers).
186 #
186 #
187 # unbundle assumes local user cannot lock remote repo (new ssh
187 # unbundle assumes local user cannot lock remote repo (new ssh
188 # servers, http servers).
188 # servers, http servers).
189
189
190 if not pushop.remote.canpush():
190 if not pushop.remote.canpush():
191 raise util.Abort(_("destination does not support push"))
191 raise util.Abort(_("destination does not support push"))
192 # get local lock as we might write phase data
192 # get local lock as we might write phase data
193 locallock = None
193 locallock = None
194 try:
194 try:
195 locallock = pushop.repo.lock()
195 locallock = pushop.repo.lock()
196 pushop.locallocked = True
196 pushop.locallocked = True
197 except IOError, err:
197 except IOError, err:
198 pushop.locallocked = False
198 pushop.locallocked = False
199 if err.errno != errno.EACCES:
199 if err.errno != errno.EACCES:
200 raise
200 raise
201 # source repo cannot be locked.
201 # source repo cannot be locked.
202 # We do not abort the push, but just disable the local phase
202 # We do not abort the push, but just disable the local phase
203 # synchronisation.
203 # synchronisation.
204 msg = 'cannot lock source repository: %s\n' % err
204 msg = 'cannot lock source repository: %s\n' % err
205 pushop.ui.debug(msg)
205 pushop.ui.debug(msg)
206 try:
206 try:
207 pushop.repo.checkpush(pushop)
207 pushop.repo.checkpush(pushop)
208 lock = None
208 lock = None
209 unbundle = pushop.remote.capable('unbundle')
209 unbundle = pushop.remote.capable('unbundle')
210 if not unbundle:
210 if not unbundle:
211 lock = pushop.remote.lock()
211 lock = pushop.remote.lock()
212 try:
212 try:
213 _pushdiscovery(pushop)
213 _pushdiscovery(pushop)
214 if (pushop.repo.ui.configbool('experimental', 'bundle2-exp',
214 if (pushop.repo.ui.configbool('experimental', 'bundle2-exp',
215 False)
215 False)
216 and pushop.remote.capable('bundle2-exp')):
216 and pushop.remote.capable('bundle2-exp')):
217 _pushbundle2(pushop)
217 _pushbundle2(pushop)
218 _pushchangeset(pushop)
218 _pushchangeset(pushop)
219 _pushsyncphase(pushop)
219 _pushsyncphase(pushop)
220 _pushobsolete(pushop)
220 _pushobsolete(pushop)
221 _pushbookmark(pushop)
221 _pushbookmark(pushop)
222 finally:
222 finally:
223 if lock is not None:
223 if lock is not None:
224 lock.release()
224 lock.release()
225 finally:
225 finally:
226 if locallock is not None:
226 if locallock is not None:
227 locallock.release()
227 locallock.release()
228
228
229 return pushop
229 return pushop
230
230
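A hedged sketch of a caller; `repo` and `dest` are assumptions, and hg.peer() is the usual way to obtain the `remote` argument:

from mercurial import hg
remote = hg.peer(repo, {}, dest)     # hypothetical destination path
pushop = push(repo, remote, force=False, newbranch=False)
if pushop.cgresult == 0:
    repo.ui.warn('push to %s failed\n' % dest)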
231 # list of steps to perform discovery before push
231 # list of steps to perform discovery before push
232 pushdiscoveryorder = []
232 pushdiscoveryorder = []
233
233
234 # Mapping between step name and function
234 # Mapping between step name and function
235 #
235 #
236 # This exists to help extensions wrap steps if necessary
236 # This exists to help extensions wrap steps if necessary
237 pushdiscoverymapping = {}
237 pushdiscoverymapping = {}
238
238
239 def pushdiscovery(stepname):
239 def pushdiscovery(stepname):
240 """decorator for function performing discovery before push
240 """decorator for function performing discovery before push
241
241
242 The function is added to the step -> function mapping and appended to the
242 The function is added to the step -> function mapping and appended to the
243 list of steps. Beware that decorated functions will be added in order (this
243 list of steps. Beware that decorated functions will be added in order (this
244 may matter).
244 may matter).
245
245
246 You can only use this decorator for a new step; if you want to wrap a step
246 You can only use this decorator for a new step; if you want to wrap a step
247 from an extension, change the pushdiscoverymapping dictionary directly."""
247 from an extension, change the pushdiscoverymapping dictionary directly."""
248 def dec(func):
248 def dec(func):
249 assert stepname not in pushdiscoverymapping
249 assert stepname not in pushdiscoverymapping
250 pushdiscoverymapping[stepname] = func
250 pushdiscoverymapping[stepname] = func
251 pushdiscoveryorder.append(stepname)
251 pushdiscoveryorder.append(stepname)
252 return func
252 return func
253 return dec
253 return dec
254
254
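A hedged sketch of registering an extra step through this decorator; the step name and body are hypothetical:

@pushdiscovery('example')
def _pushdiscoveryexample(pushop):
    # runs after the built-in steps, in registration order
    pushop.ui.debug('example discovery step ran\n')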
255 def _pushdiscovery(pushop):
255 def _pushdiscovery(pushop):
256 """Run all discovery steps"""
256 """Run all discovery steps"""
257 for stepname in pushdiscoveryorder:
257 for stepname in pushdiscoveryorder:
258 step = pushdiscoverymapping[stepname]
258 step = pushdiscoverymapping[stepname]
259 step(pushop)
259 step(pushop)
260
260
261 @pushdiscovery('changeset')
261 @pushdiscovery('changeset')
262 def _pushdiscoverychangeset(pushop):
262 def _pushdiscoverychangeset(pushop):
263 """discover the changeset that need to be pushed"""
263 """discover the changeset that need to be pushed"""
264 unfi = pushop.repo.unfiltered()
264 unfi = pushop.repo.unfiltered()
265 fci = discovery.findcommonincoming
265 fci = discovery.findcommonincoming
266 commoninc = fci(unfi, pushop.remote, force=pushop.force)
266 commoninc = fci(unfi, pushop.remote, force=pushop.force)
267 common, inc, remoteheads = commoninc
267 common, inc, remoteheads = commoninc
268 fco = discovery.findcommonoutgoing
268 fco = discovery.findcommonoutgoing
269 outgoing = fco(unfi, pushop.remote, onlyheads=pushop.revs,
269 outgoing = fco(unfi, pushop.remote, onlyheads=pushop.revs,
270 commoninc=commoninc, force=pushop.force)
270 commoninc=commoninc, force=pushop.force)
271 pushop.outgoing = outgoing
271 pushop.outgoing = outgoing
272 pushop.remoteheads = remoteheads
272 pushop.remoteheads = remoteheads
273 pushop.incoming = inc
273 pushop.incoming = inc
274
274
275 @pushdiscovery('phase')
275 @pushdiscovery('phase')
276 def _pushdiscoveryphase(pushop):
276 def _pushdiscoveryphase(pushop):
277 """discover the phase that needs to be pushed
277 """discover the phase that needs to be pushed
278
278
279 (computed for both success and failure case for changesets push)"""
279 (computed for both success and failure case for changesets push)"""
280 outgoing = pushop.outgoing
280 outgoing = pushop.outgoing
281 unfi = pushop.repo.unfiltered()
281 unfi = pushop.repo.unfiltered()
282 remotephases = pushop.remote.listkeys('phases')
282 remotephases = pushop.remote.listkeys('phases')
283 publishing = remotephases.get('publishing', False)
283 publishing = remotephases.get('publishing', False)
284 ana = phases.analyzeremotephases(pushop.repo,
284 ana = phases.analyzeremotephases(pushop.repo,
285 pushop.fallbackheads,
285 pushop.fallbackheads,
286 remotephases)
286 remotephases)
287 pheads, droots = ana
287 pheads, droots = ana
288 extracond = ''
288 extracond = ''
289 if not publishing:
289 if not publishing:
290 extracond = ' and public()'
290 extracond = ' and public()'
291 revset = 'heads((%%ln::%%ln) %s)' % extracond
291 revset = 'heads((%%ln::%%ln) %s)' % extracond
292 # Get the list of all revs that are draft on the remote but public here.
292 # Get the list of all revs that are draft on the remote but public here.
293 # XXX Beware that the revset breaks if droots is not strictly
293 # XXX Beware that the revset breaks if droots is not strictly
294 # XXX roots; we may want to ensure it is, but that is costly
294 # XXX roots; we may want to ensure it is, but that is costly
295 fallback = list(unfi.set(revset, droots, pushop.fallbackheads))
295 fallback = list(unfi.set(revset, droots, pushop.fallbackheads))
296 if not outgoing.missing:
296 if not outgoing.missing:
297 future = fallback
297 future = fallback
298 else:
298 else:
299 # add the changesets we are going to push as draft
299 # add the changesets we are going to push as draft
300 #
300 #
301 # should not be necessary for publishing servers, but because of an
301 # should not be necessary for publishing servers, but because of an
302 # issue fixed in xxxxx we have to do it anyway.
302 # issue fixed in xxxxx we have to do it anyway.
303 fdroots = list(unfi.set('roots(%ln + %ln::)',
303 fdroots = list(unfi.set('roots(%ln + %ln::)',
304 outgoing.missing, droots))
304 outgoing.missing, droots))
305 fdroots = [f.node() for f in fdroots]
305 fdroots = [f.node() for f in fdroots]
306 future = list(unfi.set(revset, fdroots, pushop.futureheads))
306 future = list(unfi.set(revset, fdroots, pushop.futureheads))
307 pushop.outdatedphases = future
307 pushop.outdatedphases = future
308 pushop.fallbackoutdatedphases = fallback
308 pushop.fallbackoutdatedphases = fallback
309
309
310 @pushdiscovery('obsmarker')
310 @pushdiscovery('obsmarker')
311 def _pushdiscoveryobsmarkers(pushop):
311 def _pushdiscoveryobsmarkers(pushop):
312 if (obsolete.isenabled(pushop.repo, obsolete.exchangeopt)
312 if (obsolete.isenabled(pushop.repo, obsolete.exchangeopt)
313 and pushop.repo.obsstore
313 and pushop.repo.obsstore
314 and 'obsolete' in pushop.remote.listkeys('namespaces')):
314 and 'obsolete' in pushop.remote.listkeys('namespaces')):
315 repo = pushop.repo
315 repo = pushop.repo
316 # very naive computation; it can be quite expensive on big repos.
316 # very naive computation; it can be quite expensive on big repos.
317 # However: evolution is currently slow on them anyway.
317 # However: evolution is currently slow on them anyway.
318 nodes = (c.node() for c in repo.set('::%ln', pushop.futureheads))
318 nodes = (c.node() for c in repo.set('::%ln', pushop.futureheads))
319 pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers(nodes)
319 pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers(nodes)
320
320
321 @pushdiscovery('bookmarks')
321 @pushdiscovery('bookmarks')
322 def _pushdiscoverybookmarks(pushop):
322 def _pushdiscoverybookmarks(pushop):
323 ui = pushop.ui
323 ui = pushop.ui
324 repo = pushop.repo.unfiltered()
324 repo = pushop.repo.unfiltered()
325 remote = pushop.remote
325 remote = pushop.remote
326 ui.debug("checking for updated bookmarks\n")
326 ui.debug("checking for updated bookmarks\n")
327 ancestors = ()
327 ancestors = ()
328 if pushop.revs:
328 if pushop.revs:
329 revnums = map(repo.changelog.rev, pushop.revs)
329 revnums = map(repo.changelog.rev, pushop.revs)
330 ancestors = repo.changelog.ancestors(revnums, inclusive=True)
330 ancestors = repo.changelog.ancestors(revnums, inclusive=True)
331 remotebookmark = remote.listkeys('bookmarks')
331 remotebookmark = remote.listkeys('bookmarks')
332
332
333 explicit = set(pushop.bookmarks)
333 explicit = set(pushop.bookmarks)
334
334
335 comp = bookmod.compare(repo, repo._bookmarks, remotebookmark, srchex=hex)
335 comp = bookmod.compare(repo, repo._bookmarks, remotebookmark, srchex=hex)
336 addsrc, adddst, advsrc, advdst, diverge, differ, invalid = comp
336 addsrc, adddst, advsrc, advdst, diverge, differ, invalid = comp
337 for b, scid, dcid in advsrc:
337 for b, scid, dcid in advsrc:
338 if b in explicit:
338 if b in explicit:
339 explicit.remove(b)
339 explicit.remove(b)
340 if not ancestors or repo[scid].rev() in ancestors:
340 if not ancestors or repo[scid].rev() in ancestors:
341 pushop.outbookmarks.append((b, dcid, scid))
341 pushop.outbookmarks.append((b, dcid, scid))
342 # search added bookmark
342 # search added bookmark
343 for b, scid, dcid in addsrc:
343 for b, scid, dcid in addsrc:
344 if b in explicit:
344 if b in explicit:
345 explicit.remove(b)
345 explicit.remove(b)
346 pushop.outbookmarks.append((b, '', scid))
346 pushop.outbookmarks.append((b, '', scid))
347 # search for overwritten bookmark
347 # search for overwritten bookmark
348 for b, scid, dcid in advdst + diverge + differ:
348 for b, scid, dcid in advdst + diverge + differ:
349 if b in explicit:
349 if b in explicit:
350 explicit.remove(b)
350 explicit.remove(b)
351 pushop.outbookmarks.append((b, dcid, scid))
351 pushop.outbookmarks.append((b, dcid, scid))
352 # search for bookmark to delete
352 # search for bookmark to delete
353 for b, scid, dcid in adddst:
353 for b, scid, dcid in adddst:
354 if b in explicit:
354 if b in explicit:
355 explicit.remove(b)
355 explicit.remove(b)
356 # treat as "deleted locally"
356 # treat as "deleted locally"
357 pushop.outbookmarks.append((b, dcid, ''))
357 pushop.outbookmarks.append((b, dcid, ''))
358
358
359 if explicit:
359 if explicit:
360 explicit = sorted(explicit)
360 explicit = sorted(explicit)
361 # we should probably list all of them
361 # we should probably list all of them
362 ui.warn(_('bookmark %s does not exist on the local '
362 ui.warn(_('bookmark %s does not exist on the local '
363 'or remote repository!\n') % explicit[0])
363 'or remote repository!\n') % explicit[0])
364 pushop.bkresult = 2
364 pushop.bkresult = 2
365
365
366 pushop.outbookmarks.sort()
366 pushop.outbookmarks.sort()
367
367
368 def _pushcheckoutgoing(pushop):
368 def _pushcheckoutgoing(pushop):
369 outgoing = pushop.outgoing
369 outgoing = pushop.outgoing
370 unfi = pushop.repo.unfiltered()
370 unfi = pushop.repo.unfiltered()
371 if not outgoing.missing:
371 if not outgoing.missing:
372 # nothing to push
372 # nothing to push
373 scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
373 scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
374 return False
374 return False
375 # something to push
375 # something to push
376 if not pushop.force:
376 if not pushop.force:
377 # if repo.obsstore == False --> no obsolete
377 # if repo.obsstore == False --> no obsolete
378 # then, save the iteration
378 # then, save the iteration
379 if unfi.obsstore:
379 if unfi.obsstore:
380 # these messages are defined here for 80-char limit reasons
380 # these messages are defined here for 80-char limit reasons
381 mso = _("push includes obsolete changeset: %s!")
381 mso = _("push includes obsolete changeset: %s!")
382 mst = {"unstable": _("push includes unstable changeset: %s!"),
382 mst = {"unstable": _("push includes unstable changeset: %s!"),
383 "bumped": _("push includes bumped changeset: %s!"),
383 "bumped": _("push includes bumped changeset: %s!"),
384 "divergent": _("push includes divergent changeset: %s!")}
384 "divergent": _("push includes divergent changeset: %s!")}
385 # If we are to push and there is at least one
385 # If we are to push and there is at least one
386 # obsolete or unstable changeset in missing, at
386 # obsolete or unstable changeset in missing, at
387 # least one of the missing heads will be obsolete or
387 # least one of the missing heads will be obsolete or
388 # unstable. So checking heads only is ok.
388 # unstable. So checking heads only is ok.
389 for node in outgoing.missingheads:
389 for node in outgoing.missingheads:
390 ctx = unfi[node]
390 ctx = unfi[node]
391 if ctx.obsolete():
391 if ctx.obsolete():
392 raise util.Abort(mso % ctx)
392 raise util.Abort(mso % ctx)
393 elif ctx.troubled():
393 elif ctx.troubled():
394 raise util.Abort(mst[ctx.troubles()[0]] % ctx)
394 raise util.Abort(mst[ctx.troubles()[0]] % ctx)
395 newbm = pushop.ui.configlist('bookmarks', 'pushing')
395 newbm = pushop.ui.configlist('bookmarks', 'pushing')
396 discovery.checkheads(unfi, pushop.remote, outgoing,
396 discovery.checkheads(unfi, pushop.remote, outgoing,
397 pushop.remoteheads,
397 pushop.remoteheads,
398 pushop.newbranch,
398 pushop.newbranch,
399 bool(pushop.incoming),
399 bool(pushop.incoming),
400 newbm)
400 newbm)
401 return True
401 return True
402
402
403 # List of names of steps to perform for an outgoing bundle2, order matters.
403 # List of names of steps to perform for an outgoing bundle2, order matters.
404 b2partsgenorder = []
404 b2partsgenorder = []
405
405
406 # Mapping between step name and function
406 # Mapping between step name and function
407 #
407 #
408 # This exists to help extensions wrap steps if necessary
408 # This exists to help extensions wrap steps if necessary
409 b2partsgenmapping = {}
409 b2partsgenmapping = {}
410
410
411 def b2partsgenerator(stepname):
411 def b2partsgenerator(stepname):
412 """decorator for function generating bundle2 part
412 """decorator for function generating bundle2 part
413
413
414 The function is added to the step -> function mapping and appended to the
414 The function is added to the step -> function mapping and appended to the
415 list of steps. Beware that decorated functions will be added in order
415 list of steps. Beware that decorated functions will be added in order
416 (this may matter).
416 (this may matter).
417
417
418 You can only use this decorator for new steps; if you want to wrap a step
418 You can only use this decorator for new steps; if you want to wrap a step
419 from an extension, modify the b2partsgenmapping dictionary directly."""
419 from an extension, modify the b2partsgenmapping dictionary directly."""
420 def dec(func):
420 def dec(func):
421 assert stepname not in b2partsgenmapping
421 assert stepname not in b2partsgenmapping
422 b2partsgenmapping[stepname] = func
422 b2partsgenmapping[stepname] = func
423 b2partsgenorder.append(stepname)
423 b2partsgenorder.append(stepname)
424 return func
424 return func
425 return dec
425 return dec
426
426
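A hedged sketch of what the docstring suggests for extensions: wrap an already-registered step by swapping its entry in the mapping (the wrapper shown is hypothetical):

origphase = b2partsgenmapping['phase']
def _tracedphase(pushop, bundler):
    pushop.ui.debug('generating phase parts\n')
    return origphase(pushop, bundler)
b2partsgenmapping['phase'] = _tracedphase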
427 @b2partsgenerator('changeset')
427 @b2partsgenerator('changeset')
428 def _pushb2ctx(pushop, bundler):
428 def _pushb2ctx(pushop, bundler):
429 """handle changegroup push through bundle2
429 """handle changegroup push through bundle2
430
430
431 addchangegroup result is stored in the ``pushop.cgresult`` attribute.
431 addchangegroup result is stored in the ``pushop.cgresult`` attribute.
432 """
432 """
433 if 'changesets' in pushop.stepsdone:
433 if 'changesets' in pushop.stepsdone:
434 return
434 return
435 pushop.stepsdone.add('changesets')
435 pushop.stepsdone.add('changesets')
436 # Send known heads to the server for race detection.
436 # Send known heads to the server for race detection.
437 if not _pushcheckoutgoing(pushop):
437 if not _pushcheckoutgoing(pushop):
438 return
438 return
439 pushop.repo.prepushoutgoinghooks(pushop.repo,
439 pushop.repo.prepushoutgoinghooks(pushop.repo,
440 pushop.remote,
440 pushop.remote,
441 pushop.outgoing)
441 pushop.outgoing)
442 if not pushop.force:
442 if not pushop.force:
443 bundler.newpart('B2X:CHECK:HEADS', data=iter(pushop.remoteheads))
443 bundler.newpart('B2X:CHECK:HEADS', data=iter(pushop.remoteheads))
444 cg = changegroup.getlocalchangegroup(pushop.repo, 'push', pushop.outgoing)
444 cg = changegroup.getlocalchangegroup(pushop.repo, 'push', pushop.outgoing)
445 cgpart = bundler.newpart('B2X:CHANGEGROUP', data=cg.getchunks())
445 cgpart = bundler.newpart('B2X:CHANGEGROUP', data=cg.getchunks())
446 def handlereply(op):
446 def handlereply(op):
447 """extract addchangroup returns from server reply"""
447 """extract addchangroup returns from server reply"""
448 cgreplies = op.records.getreplies(cgpart.id)
448 cgreplies = op.records.getreplies(cgpart.id)
449 assert len(cgreplies['changegroup']) == 1
449 assert len(cgreplies['changegroup']) == 1
450 pushop.cgresult = cgreplies['changegroup'][0]['return']
450 pushop.cgresult = cgreplies['changegroup'][0]['return']
451 return handlereply
451 return handlereply
452
452
453 @b2partsgenerator('phase')
453 @b2partsgenerator('phase')
454 def _pushb2phases(pushop, bundler):
454 def _pushb2phases(pushop, bundler):
455 """handle phase push through bundle2"""
455 """handle phase push through bundle2"""
456 if 'phases' in pushop.stepsdone:
456 if 'phases' in pushop.stepsdone:
457 return
457 return
458 b2caps = bundle2.bundle2caps(pushop.remote)
458 b2caps = bundle2.bundle2caps(pushop.remote)
459 if not 'b2x:pushkey' in b2caps:
459 if not 'b2x:pushkey' in b2caps:
460 return
460 return
461 pushop.stepsdone.add('phases')
461 pushop.stepsdone.add('phases')
462 part2node = []
462 part2node = []
463 enc = pushkey.encode
463 enc = pushkey.encode
464 for newremotehead in pushop.outdatedphases:
464 for newremotehead in pushop.outdatedphases:
465 part = bundler.newpart('b2x:pushkey')
465 part = bundler.newpart('b2x:pushkey')
466 part.addparam('namespace', enc('phases'))
466 part.addparam('namespace', enc('phases'))
467 part.addparam('key', enc(newremotehead.hex()))
467 part.addparam('key', enc(newremotehead.hex()))
468 part.addparam('old', enc(str(phases.draft)))
468 part.addparam('old', enc(str(phases.draft)))
469 part.addparam('new', enc(str(phases.public)))
469 part.addparam('new', enc(str(phases.public)))
470 part2node.append((part.id, newremotehead))
470 part2node.append((part.id, newremotehead))
471 def handlereply(op):
471 def handlereply(op):
472 for partid, node in part2node:
472 for partid, node in part2node:
473 partrep = op.records.getreplies(partid)
473 partrep = op.records.getreplies(partid)
474 results = partrep['pushkey']
474 results = partrep['pushkey']
475 assert len(results) <= 1
475 assert len(results) <= 1
476 msg = None
476 msg = None
477 if not results:
477 if not results:
478 msg = _('server ignored update of %s to public!\n') % node
478 msg = _('server ignored update of %s to public!\n') % node
479 elif not int(results[0]['return']):
479 elif not int(results[0]['return']):
480 msg = _('updating %s to public failed!\n') % node
480 msg = _('updating %s to public failed!\n') % node
481 if msg is not None:
481 if msg is not None:
482 pushop.ui.warn(msg)
482 pushop.ui.warn(msg)
483 return handlereply
483 return handlereply
484
484
485 @b2partsgenerator('obsmarkers')
485 @b2partsgenerator('obsmarkers')
486 def _pushb2obsmarkers(pushop, bundler):
486 def _pushb2obsmarkers(pushop, bundler):
487 if 'obsmarkers' in pushop.stepsdone:
487 if 'obsmarkers' in pushop.stepsdone:
488 return
488 return
489 remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
489 remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
490 if obsolete.commonversion(remoteversions) is None:
490 if obsolete.commonversion(remoteversions) is None:
491 return
491 return
492 pushop.stepsdone.add('obsmarkers')
492 pushop.stepsdone.add('obsmarkers')
493 if pushop.outobsmarkers:
493 if pushop.outobsmarkers:
494 buildobsmarkerspart(bundler, pushop.outobsmarkers)
494 buildobsmarkerspart(bundler, pushop.outobsmarkers)
495
495
496 @b2partsgenerator('bookmarks')
496 @b2partsgenerator('bookmarks')
497 def _pushb2bookmarks(pushop, bundler):
497 def _pushb2bookmarks(pushop, bundler):
498 """handle phase push through bundle2"""
498 """handle phase push through bundle2"""
499 if 'bookmarks' in pushop.stepsdone:
499 if 'bookmarks' in pushop.stepsdone:
500 return
500 return
501 b2caps = bundle2.bundle2caps(pushop.remote)
501 b2caps = bundle2.bundle2caps(pushop.remote)
502 if 'b2x:pushkey' not in b2caps:
502 if 'b2x:pushkey' not in b2caps:
503 return
503 return
504 pushop.stepsdone.add('bookmarks')
504 pushop.stepsdone.add('bookmarks')
505 part2book = []
505 part2book = []
506 enc = pushkey.encode
506 enc = pushkey.encode
507 for book, old, new in pushop.outbookmarks:
507 for book, old, new in pushop.outbookmarks:
508 part = bundler.newpart('b2x:pushkey')
508 part = bundler.newpart('b2x:pushkey')
509 part.addparam('namespace', enc('bookmarks'))
509 part.addparam('namespace', enc('bookmarks'))
510 part.addparam('key', enc(book))
510 part.addparam('key', enc(book))
511 part.addparam('old', enc(old))
511 part.addparam('old', enc(old))
512 part.addparam('new', enc(new))
512 part.addparam('new', enc(new))
513 action = 'update'
513 action = 'update'
514 if not old:
514 if not old:
515 action = 'export'
515 action = 'export'
516 elif not new:
516 elif not new:
517 action = 'delete'
517 action = 'delete'
518 part2book.append((part.id, book, action))
518 part2book.append((part.id, book, action))
519
519
520
520
521 def handlereply(op):
521 def handlereply(op):
522 ui = pushop.ui
522 ui = pushop.ui
523 for partid, book, action in part2book:
523 for partid, book, action in part2book:
524 partrep = op.records.getreplies(partid)
524 partrep = op.records.getreplies(partid)
525 results = partrep['pushkey']
525 results = partrep['pushkey']
526 assert len(results) <= 1
526 assert len(results) <= 1
527 if not results:
527 if not results:
528 pushop.ui.warn(_('server ignored bookmark %s update\n') % book)
528 pushop.ui.warn(_('server ignored bookmark %s update\n') % book)
529 else:
529 else:
530 ret = int(results[0]['return'])
530 ret = int(results[0]['return'])
531 if ret:
531 if ret:
532 ui.status(bookmsgmap[action][0] % book)
532 ui.status(bookmsgmap[action][0] % book)
533 else:
533 else:
534 ui.warn(bookmsgmap[action][1] % book)
534 ui.warn(bookmsgmap[action][1] % book)
535 if pushop.bkresult is not None:
535 if pushop.bkresult is not None:
536 pushop.bkresult = 1
536 pushop.bkresult = 1
537 return handlereply
537 return handlereply
538
538
539
539
540 def _pushbundle2(pushop):
540 def _pushbundle2(pushop):
541 """push data to the remote using bundle2
541 """push data to the remote using bundle2
542
542
543 The parts to send are produced by the generators registered in
543 The parts to send are produced by the generators registered in
544 b2partsgenorder (changesets, phases, obsmarkers, bookmarks)."""
544 b2partsgenorder (changesets, phases, obsmarkers, bookmarks)."""
545 bundler = bundle2.bundle20(pushop.ui, bundle2.bundle2caps(pushop.remote))
545 bundler = bundle2.bundle20(pushop.ui, bundle2.bundle2caps(pushop.remote))
546 # create reply capability
546 # create reply capability
547 capsblob = bundle2.encodecaps(bundle2.getrepocaps(pushop.repo))
547 capsblob = bundle2.encodecaps(bundle2.getrepocaps(pushop.repo))
548 bundler.newpart('b2x:replycaps', data=capsblob)
548 bundler.newpart('b2x:replycaps', data=capsblob)
549 replyhandlers = []
549 replyhandlers = []
550 for partgenname in b2partsgenorder:
550 for partgenname in b2partsgenorder:
551 partgen = b2partsgenmapping[partgenname]
551 partgen = b2partsgenmapping[partgenname]
552 ret = partgen(pushop, bundler)
552 ret = partgen(pushop, bundler)
553 if callable(ret):
553 if callable(ret):
554 replyhandlers.append(ret)
554 replyhandlers.append(ret)
555 # do not push if nothing to push
555 # do not push if nothing to push
556 if bundler.nbparts <= 1:
556 if bundler.nbparts <= 1:
557 return
557 return
558 stream = util.chunkbuffer(bundler.getchunks())
558 stream = util.chunkbuffer(bundler.getchunks())
559 try:
559 try:
560 reply = pushop.remote.unbundle(stream, ['force'], 'push')
560 reply = pushop.remote.unbundle(stream, ['force'], 'push')
561 except error.BundleValueError, exc:
561 except error.BundleValueError, exc:
562 raise util.Abort('missing support for %s' % exc)
562 raise util.Abort('missing support for %s' % exc)
563 try:
563 try:
564 op = bundle2.processbundle(pushop.repo, reply)
564 op = bundle2.processbundle(pushop.repo, reply)
565 except error.BundleValueError, exc:
565 except error.BundleValueError, exc:
566 raise util.Abort('missing support for %s' % exc)
566 raise util.Abort('missing support for %s' % exc)
567 for rephand in replyhandlers:
567 for rephand in replyhandlers:
568 rephand(op)
568 rephand(op)
569
569
570 def _pushchangeset(pushop):
570 def _pushchangeset(pushop):
571 """Make the actual push of changeset bundle to remote repo"""
571 """Make the actual push of changeset bundle to remote repo"""
572 if 'changesets' in pushop.stepsdone:
572 if 'changesets' in pushop.stepsdone:
573 return
573 return
574 pushop.stepsdone.add('changesets')
574 pushop.stepsdone.add('changesets')
575 if not _pushcheckoutgoing(pushop):
575 if not _pushcheckoutgoing(pushop):
576 return
576 return
577 pushop.repo.prepushoutgoinghooks(pushop.repo,
577 pushop.repo.prepushoutgoinghooks(pushop.repo,
578 pushop.remote,
578 pushop.remote,
579 pushop.outgoing)
579 pushop.outgoing)
580 outgoing = pushop.outgoing
580 outgoing = pushop.outgoing
581 unbundle = pushop.remote.capable('unbundle')
581 unbundle = pushop.remote.capable('unbundle')
582 # TODO: get bundlecaps from remote
582 # TODO: get bundlecaps from remote
583 bundlecaps = None
583 bundlecaps = None
584 # create a changegroup from local
584 # create a changegroup from local
585 if pushop.revs is None and not (outgoing.excluded
585 if pushop.revs is None and not (outgoing.excluded
586 or pushop.repo.changelog.filteredrevs):
586 or pushop.repo.changelog.filteredrevs):
587 # push everything,
587 # push everything,
588 # use the fast path, no race possible on push
588 # use the fast path, no race possible on push
589 bundler = changegroup.cg1packer(pushop.repo, bundlecaps)
589 bundler = changegroup.cg1packer(pushop.repo, bundlecaps)
590 cg = changegroup.getsubset(pushop.repo,
590 cg = changegroup.getsubset(pushop.repo,
591 outgoing,
591 outgoing,
592 bundler,
592 bundler,
593 'push',
593 'push',
594 fastpath=True)
594 fastpath=True)
595 else:
595 else:
596 cg = changegroup.getlocalchangegroup(pushop.repo, 'push', outgoing,
596 cg = changegroup.getlocalchangegroup(pushop.repo, 'push', outgoing,
597 bundlecaps)
597 bundlecaps)
598
598
599 # apply changegroup to remote
599 # apply changegroup to remote
600 if unbundle:
600 if unbundle:
601 # local repo finds heads on server, finds out what
601 # local repo finds heads on server, finds out what
602 # revs it must push. once revs transferred, if server
602 # revs it must push. once revs transferred, if server
603 # finds it has different heads (someone else won
603 # finds it has different heads (someone else won
604 # commit/push race), server aborts.
604 # commit/push race), server aborts.
605 if pushop.force:
605 if pushop.force:
606 remoteheads = ['force']
606 remoteheads = ['force']
607 else:
607 else:
608 remoteheads = pushop.remoteheads
608 remoteheads = pushop.remoteheads
609 # ssh: return remote's addchangegroup()
609 # ssh: return remote's addchangegroup()
610 # http: return remote's addchangegroup() or 0 for error
610 # http: return remote's addchangegroup() or 0 for error
611 pushop.cgresult = pushop.remote.unbundle(cg, remoteheads,
611 pushop.cgresult = pushop.remote.unbundle(cg, remoteheads,
612 pushop.repo.url())
612 pushop.repo.url())
613 else:
613 else:
614 # we return an integer indicating remote head count
614 # we return an integer indicating remote head count
615 # change
615 # change
616 pushop.cgresult = pushop.remote.addchangegroup(cg, 'push',
616 pushop.cgresult = pushop.remote.addchangegroup(cg, 'push',
617 pushop.repo.url())
617 pushop.repo.url())
618
618
619 def _pushsyncphase(pushop):
619 def _pushsyncphase(pushop):
620 """synchronise phase information locally and remotely"""
620 """synchronise phase information locally and remotely"""
621 cheads = pushop.commonheads
621 cheads = pushop.commonheads
622 # even when we don't push, exchanging phase data is useful
622 # even when we don't push, exchanging phase data is useful
623 remotephases = pushop.remote.listkeys('phases')
623 remotephases = pushop.remote.listkeys('phases')
624 if (pushop.ui.configbool('ui', '_usedassubrepo', False)
624 if (pushop.ui.configbool('ui', '_usedassubrepo', False)
625 and remotephases # server supports phases
625 and remotephases # server supports phases
626 and pushop.cgresult is None # nothing was pushed
626 and pushop.cgresult is None # nothing was pushed
627 and remotephases.get('publishing', False)):
627 and remotephases.get('publishing', False)):
628 # When:
628 # When:
629 # - this is a subrepo push
629 # - this is a subrepo push
630 # - and remote supports phases
630 # - and remote supports phases
631 # - and no changeset was pushed
631 # - and no changeset was pushed
632 # - and remote is publishing
632 # - and remote is publishing
633 # We may be in issue 3871 case!
633 # We may be in issue 3871 case!
634 # We drop the possible phase synchronisation done by
634 # We drop the possible phase synchronisation done by
635 # courtesy to publish changesets that are possibly locally
635 # courtesy to publish changesets that are possibly locally
636 # draft on the remote.
636 # draft on the remote.
637 remotephases = {'publishing': 'True'}
637 remotephases = {'publishing': 'True'}
638 if not remotephases: # old server or public only reply from non-publishing
638 if not remotephases: # old server or public only reply from non-publishing
639 _localphasemove(pushop, cheads)
639 _localphasemove(pushop, cheads)
640 # don't push any phase data as there is nothing to push
640 # don't push any phase data as there is nothing to push
641 else:
641 else:
642 ana = phases.analyzeremotephases(pushop.repo, cheads,
642 ana = phases.analyzeremotephases(pushop.repo, cheads,
643 remotephases)
643 remotephases)
644 pheads, droots = ana
644 pheads, droots = ana
645 ### Apply remote phase on local
645 ### Apply remote phase on local
646 if remotephases.get('publishing', False):
646 if remotephases.get('publishing', False):
647 _localphasemove(pushop, cheads)
647 _localphasemove(pushop, cheads)
648 else: # publish = False
648 else: # publish = False
649 _localphasemove(pushop, pheads)
649 _localphasemove(pushop, pheads)
650 _localphasemove(pushop, cheads, phases.draft)
650 _localphasemove(pushop, cheads, phases.draft)
651 ### Apply local phase on remote
651 ### Apply local phase on remote
652
652
653 if pushop.cgresult:
653 if pushop.cgresult:
654 if 'phases' in pushop.stepsdone:
654 if 'phases' in pushop.stepsdone:
655 # phases already pushed through bundle2
655 # phases already pushed through bundle2
656 return
656 return
657 outdated = pushop.outdatedphases
657 outdated = pushop.outdatedphases
658 else:
658 else:
659 outdated = pushop.fallbackoutdatedphases
659 outdated = pushop.fallbackoutdatedphases
660
660
661 pushop.stepsdone.add('phases')
661 pushop.stepsdone.add('phases')
662
662
663 # filter heads already turned public by the push
663 # filter heads already turned public by the push
664 outdated = [c for c in outdated if c.node() not in pheads]
664 outdated = [c for c in outdated if c.node() not in pheads]
665 b2caps = bundle2.bundle2caps(pushop.remote)
665 b2caps = bundle2.bundle2caps(pushop.remote)
666 if 'b2x:pushkey' in b2caps:
666 if 'b2x:pushkey' in b2caps:
667 # server supports bundle2, let's do a batched push through it
667 # server supports bundle2, let's do a batched push through it
668 #
668 #
669 # This will eventually be unified with the changesets bundle2 push
669 # This will eventually be unified with the changesets bundle2 push
670 bundler = bundle2.bundle20(pushop.ui, b2caps)
670 bundler = bundle2.bundle20(pushop.ui, b2caps)
671 capsblob = bundle2.encodecaps(bundle2.getrepocaps(pushop.repo))
671 capsblob = bundle2.encodecaps(bundle2.getrepocaps(pushop.repo))
672 bundler.newpart('b2x:replycaps', data=capsblob)
672 bundler.newpart('b2x:replycaps', data=capsblob)
673 part2node = []
673 part2node = []
674 enc = pushkey.encode
674 enc = pushkey.encode
675 for newremotehead in outdated:
675 for newremotehead in outdated:
676 part = bundler.newpart('b2x:pushkey')
676 part = bundler.newpart('b2x:pushkey')
677 part.addparam('namespace', enc('phases'))
677 part.addparam('namespace', enc('phases'))
678 part.addparam('key', enc(newremotehead.hex()))
678 part.addparam('key', enc(newremotehead.hex()))
679 part.addparam('old', enc(str(phases.draft)))
679 part.addparam('old', enc(str(phases.draft)))
680 part.addparam('new', enc(str(phases.public)))
680 part.addparam('new', enc(str(phases.public)))
681 part2node.append((part.id, newremotehead))
681 part2node.append((part.id, newremotehead))
682 stream = util.chunkbuffer(bundler.getchunks())
682 stream = util.chunkbuffer(bundler.getchunks())
683 try:
683 try:
684 reply = pushop.remote.unbundle(stream, ['force'], 'push')
684 reply = pushop.remote.unbundle(stream, ['force'], 'push')
685 op = bundle2.processbundle(pushop.repo, reply)
685 op = bundle2.processbundle(pushop.repo, reply)
686 except error.BundleValueError, exc:
686 except error.BundleValueError, exc:
687 raise util.Abort('missing support for %s' % exc)
687 raise util.Abort('missing support for %s' % exc)
688 for partid, node in part2node:
688 for partid, node in part2node:
689 partrep = op.records.getreplies(partid)
689 partrep = op.records.getreplies(partid)
690 results = partrep['pushkey']
690 results = partrep['pushkey']
691 assert len(results) <= 1
691 assert len(results) <= 1
692 msg = None
692 msg = None
693 if not results:
693 if not results:
694 msg = _('server ignored update of %s to public!\n') % node
694 msg = _('server ignored update of %s to public!\n') % node
695 elif not int(results[0]['return']):
695 elif not int(results[0]['return']):
696 msg = _('updating %s to public failed!\n') % node
696 msg = _('updating %s to public failed!\n') % node
697 if msg is not None:
697 if msg is not None:
698 pushop.ui.warn(msg)
698 pushop.ui.warn(msg)
699
699
700 else:
700 else:
701 # fallback to the independent pushkey command
701 # fallback to the independent pushkey command
702 for newremotehead in outdated:
703 r = pushop.remote.pushkey('phases',
704 newremotehead.hex(),
705 str(phases.draft),
706 str(phases.public))
707 if not r:
708 pushop.ui.warn(_('updating %s to public failed!\n')
709 % newremotehead)
710
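For readers unfamiliar with the reply-tracking pattern above, here is a minimal sketch (not Mercurial code; `batchedpushkey` and `updates` are hypothetical) of the same idea: queue one 'b2x:pushkey' part per key, remember each part id, then match server reply records back to the keys:

    def batchedpushkey(pushop, updates):
        # updates: hypothetical list of (key, old, new) strings
        bundler = bundle2.bundle20(pushop.ui,
                                   bundle2.bundle2caps(pushop.remote))
        part2key = []
        for key, old, new in updates:
            part = bundler.newpart('b2x:pushkey')
            part.addparam('namespace', pushkey.encode('phases'))
            part.addparam('key', pushkey.encode(key))
            part.addparam('old', pushkey.encode(old))
            part.addparam('new', pushkey.encode(new))
            part2key.append((part.id, key))
        stream = util.chunkbuffer(bundler.getchunks())
        reply = pushop.remote.unbundle(stream, ['force'], 'push')
        op = bundle2.processbundle(pushop.repo, reply)
        # at most one 'pushkey' reply record per emitted part
        return [(key, op.records.getreplies(pid)['pushkey'])
                for pid, key in part2key]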
711 def _localphasemove(pushop, nodes, phase=phases.public):
712 """move <nodes> to <phase> in the local source repo"""
713 if pushop.locallocked:
714 tr = pushop.repo.transaction('push-phase-sync')
715 try:
716 phases.advanceboundary(pushop.repo, tr, phase, nodes)
717 tr.close()
718 finally:
719 tr.release()
720 else:
721 # repo is not locked, do not change any phases!
722 # Informs the user that phases should have been moved when
723 # applicable.
724 actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
725 phasestr = phases.phasenames[phase]
726 if actualmoves:
727 pushop.ui.status(_('cannot lock source repo, skipping '
728 'local %s phase update\n') % phasestr)
729
730 def _pushobsolete(pushop):
731 """utility function to push obsolete markers to a remote"""
732 if 'obsmarkers' in pushop.stepsdone:
733 return
734 pushop.ui.debug('try to push obsolete markers to remote\n')
735 repo = pushop.repo
736 remote = pushop.remote
737 pushop.stepsdone.add('obsmarkers')
738 if pushop.outobsmarkers:
739 rslts = []
740 remotedata = obsolete._pushkeyescape(pushop.outobsmarkers)
741 for key in sorted(remotedata, reverse=True):
742 # reverse sort to ensure we end with dump0
743 data = remotedata[key]
744 rslts.append(remote.pushkey('obsolete', key, '', data))
745 if [r for r in rslts if not r]:
746 msg = _('failed to push some obsolete markers!\n')
747 repo.ui.warn(msg)
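The reverse-sorted loop matters: `obsolete._pushkeyescape()` splits the marker payload across several pushkey slots, and iterating in reverse order is what guarantees 'dump0' arrives last, as the comment above says. A sketch with made-up chunk contents:

    remotedata = {'dump0': '<base85 chunk A>',    # illustrative values only
                  'dump1': '<base85 chunk B>'}
    for key in sorted(remotedata, reverse=True):  # 'dump1' first, 'dump0' last
        remote.pushkey('obsolete', key, '', remotedata[key])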
748
749 def _pushbookmark(pushop):
750 """Update bookmark position on remote"""
751 if pushop.cgresult == 0 or 'bookmarks' in pushop.stepsdone:
752 return
753 pushop.stepsdone.add('bookmarks')
754 ui = pushop.ui
755 remote = pushop.remote
756
757 for b, old, new in pushop.outbookmarks:
758 action = 'update'
759 if not old:
760 action = 'export'
761 elif not new:
762 action = 'delete'
763 if remote.pushkey('bookmarks', b, old, new):
764 ui.status(bookmsgmap[action][0] % b)
765 else:
766 ui.warn(bookmsgmap[action][1] % b)
767 # discovery can have set the value from an invalid entry
768 if pushop.bkresult is not None:
769 pushop.bkresult = 1
770
771 class pulloperation(object):
772 """An object that represents a single pull operation
773
774 Its purpose is to carry pull-related state and very common operations.
775
776 A new one should be created at the beginning of each pull and discarded
777 afterward.
778 """
779
780 def __init__(self, repo, remote, heads=None, force=False, bookmarks=()):
781 # repo we pull into
782 self.repo = repo
783 # repo we pull from
784 self.remote = remote
785 # revision we try to pull (None is "all")
786 self.heads = heads
787 # bookmark pulled explicitly
788 self.explicitbookmarks = bookmarks
789 # do we force pull?
790 self.force = force
791 # the name of the pull transaction
792 self._trname = 'pull\n' + util.hidepassword(remote.url())
793 # hold the transaction once created
794 self._tr = None
795 # set of common changeset between local and remote before pull
796 self.common = None
797 # set of pulled head
798 self.rheads = None
799 # list of missing changeset to fetch remotely
800 self.fetch = None
801 # remote bookmarks data
802 self.remotebookmarks = None
803 # result of changegroup pulling (used as return code by pull)
804 self.cgresult = None
805 # list of step already done
806 self.stepsdone = set()
807
808 @util.propertycache
809 def pulledsubset(self):
810 """heads of the set of changeset target by the pull"""
810 """heads of the set of changeset target by the pull"""
811 # compute target subset
812 if self.heads is None:
813 # We pulled every thing possible
814 # sync on everything common
815 c = set(self.common)
816 ret = list(self.common)
817 for n in self.rheads:
818 if n not in c:
819 ret.append(n)
820 return ret
821 else:
822 # We pulled a specific subset
823 # sync on this subset
824 return self.heads
825
826 def gettransaction(self):
827 """get appropriate pull transaction, creating it if needed"""
828 if self._tr is None:
829 self._tr = self.repo.transaction(self._trname)
830 self._tr.hookargs['source'] = 'pull'
831 self._tr.hookargs['url'] = self.remote.url()
832 return self._tr
833
834 def closetransaction(self):
835 """close transaction if created"""
836 if self._tr is not None:
837 repo = self.repo
838 cl = repo.unfiltered().changelog
839 p = cl.writepending() and repo.root or ""
840 p = cl.writepending() and repo.root or ""
841 repo.hook('b2x-pretransactionclose', throw=True, pending=p,
842 **self._tr.hookargs)
843 self._tr.close()
844 repo.hook('b2x-transactionclose', **self._tr.hookargs)
845
846 def releasetransaction(self):
847 """release transaction if created"""
848 if self._tr is not None:
849 self._tr.release()
850
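A hedged usage sketch of the lazy-transaction pattern this class implements: `gettransaction()` creates the transaction on first call and caches it in `_tr`, so a pull that fetches nothing never opens one (control flow below is illustrative, not the real `pull()` body):

    pullop = pulloperation(repo, remote)
    try:
        if pullop.fetch:                  # set by discovery; truthy when needed
            tr = pullop.gettransaction()  # created once, then reused
            # ... apply incoming data under tr ...
        pullop.closetransaction()         # no-op when _tr is still None
    finally:
        pullop.releasetransaction()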
851 def pull(repo, remote, heads=None, force=False, bookmarks=()):
852 pullop = pulloperation(repo, remote, heads, force, bookmarks=bookmarks)
853 if pullop.remote.local():
854 missing = set(pullop.remote.requirements) - pullop.repo.supported
855 if missing:
856 msg = _("required features are not"
857 " supported in the destination:"
858 " %s") % (', '.join(sorted(missing)))
859 raise util.Abort(msg)
860
861 pullop.remotebookmarks = remote.listkeys('bookmarks')
862 lock = pullop.repo.lock()
863 try:
864 _pulldiscovery(pullop)
865 if (pullop.repo.ui.configbool('experimental', 'bundle2-exp', False)
866 and pullop.remote.capable('bundle2-exp')):
867 _pullbundle2(pullop)
868 _pullchangeset(pullop)
869 _pullphase(pullop)
870 _pullbookmarks(pullop)
871 _pullobsolete(pullop)
872 pullop.closetransaction()
873 finally:
874 pullop.releasetransaction()
875 lock.release()
876
877 return pullop
878
879 # list of steps to perform discovery before pull
880 pulldiscoveryorder = []
881
882 # Mapping between step name and function
883 #
884 # This exists to help extensions wrap steps if necessary
885 pulldiscoverymapping = {}
886
887 def pulldiscovery(stepname):
888 """decorator for functions performing discovery before pull
889
890 The function is added to the step -> function mapping and appended to the
891 list of steps. Beware that decorated functions will be added in order (this
892 may matter).
893
894 You can only use this decorator for a new step; if you want to wrap a step
895 from an extension, change the pulldiscoverymapping dictionary directly."""
896 def dec(func):
897 assert stepname not in pulldiscoverymapping
898 pulldiscoverymapping[stepname] = func
899 pulldiscoveryorder.append(stepname)
900 return func
901 return dec
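A usage sketch (the step name `mystep` is hypothetical): an extension defining a brand-new discovery step registers it like so, and it will run after 'changegroup' in registration order:

    @pulldiscovery('mystep')
    def _pulldiscoverymystep(pullop):
        # runs once per pull, before any data is transferred
        pullop.repo.ui.debug('running mystep discovery\n')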
902
903 def _pulldiscovery(pullop):
904 """Run all discovery steps"""
905 for stepname in pulldiscoveryorder:
906 step = pulldiscoverymapping[stepname]
907 step(pullop)
908
909 @pulldiscovery('changegroup')
910 def _pulldiscoverychangegroup(pullop):
911 """discovery phase for the pull
912
913 Currently handles changeset discovery only; it will eventually handle
914 all discovery."""
915 tmp = discovery.findcommonincoming(pullop.repo.unfiltered(),
916 pullop.remote,
917 heads=pullop.heads,
918 force=pullop.force)
919 pullop.common, pullop.fetch, pullop.rheads = tmp
920
921 def _pullbundle2(pullop):
922 """pull data using bundle2
923
924 For now, the only supported data is the changegroup."""
925 remotecaps = bundle2.bundle2caps(pullop.remote)
926 kwargs = {'bundlecaps': caps20to10(pullop.repo)}
927 # pulling changegroup
928 pullop.stepsdone.add('changegroup')
929
930 kwargs['common'] = pullop.common
931 kwargs['heads'] = pullop.heads or pullop.rheads
932 kwargs['cg'] = pullop.fetch
933 if 'b2x:listkeys' in remotecaps:
934 kwargs['listkeys'] = ['phase', 'bookmarks']
935 if not pullop.fetch:
936 pullop.repo.ui.status(_("no changes found\n"))
937 pullop.cgresult = 0
938 else:
939 if pullop.heads is None and list(pullop.common) == [nullid]:
940 pullop.repo.ui.status(_("requesting all changes\n"))
941 if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
942 remoteversions = bundle2.obsmarkersversion(remotecaps)
943 if obsolete.commonversion(remoteversions) is not None:
944 kwargs['obsmarkers'] = True
945 pullop.stepsdone.add('obsmarkers')
946 _pullbundle2extraprepare(pullop, kwargs)
947 if kwargs.keys() == ['format']:
948 return # nothing to pull
949 bundle = pullop.remote.getbundle('pull', **kwargs)
950 try:
951 op = bundle2.processbundle(pullop.repo, bundle, pullop.gettransaction)
952 except error.BundleValueError, exc:
953 raise util.Abort('missing support for %s' % exc)
954
955 if pullop.fetch:
956 changedheads = 0
957 pullop.cgresult = 1
958 for cg in op.records['changegroup']:
959 ret = cg['return']
960 # If any changegroup result is 0, return 0
961 if ret == 0:
962 pullop.cgresult = 0
963 break
964 if ret < -1:
965 changedheads += ret + 1
966 elif ret > 1:
967 changedheads += ret - 1
968 if changedheads > 0:
969 pullop.cgresult = 1 + changedheads
970 elif changedheads < 0:
971 pullop.cgresult = -1 + changedheads
972
973 # processing phases change
974 for namespace, value in op.records['listkeys']:
975 if namespace == 'phases':
976 _pullapplyphases(pullop, value)
977
978 # processing bookmark update
979 for namespace, value in op.records['listkeys']:
980 if namespace == 'bookmarks':
981 pullop.remotebookmarks = value
982 _pullbookmarks(pullop)
983
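Putting the branches above together, the arguments sent to `remote.getbundle('pull', **kwargs)` end up shaped roughly like this (a sketch; values are illustrative, and 'listkeys'/'obsmarkers' only appear when the server advertises the matching capability):

    kwargs = {
        'bundlecaps': caps20to10(pullop.repo),   # 'HG2Y' + 'bundle2=...'
        'common': pullop.common,                 # nodes both sides already have
        'heads': pullop.heads or pullop.rheads,
        'cg': pullop.fetch,                      # False skips the changegroup part
        'listkeys': ['phase', 'bookmarks'],      # if 'b2x:listkeys' in remotecaps
        'obsmarkers': True,                      # if a common version exists
    }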
984 def _pullbundle2extraprepare(pullop, kwargs):
985 """hook function so that extensions can extend the getbundle call"""
986 pass
987
988 def _pullchangeset(pullop):
989 """pull changeset from unbundle into the local repo"""
990 # We delay opening the transaction as late as possible so we don't
991 # open a transaction for nothing, and so we don't break future useful
992 # rollback calls
993 if 'changegroup' in pullop.stepsdone:
994 return
995 pullop.stepsdone.add('changegroup')
996 if not pullop.fetch:
997 pullop.repo.ui.status(_("no changes found\n"))
998 pullop.cgresult = 0
999 return
1000 pullop.gettransaction()
1001 if pullop.heads is None and list(pullop.common) == [nullid]:
1002 pullop.repo.ui.status(_("requesting all changes\n"))
1003 elif pullop.heads is None and pullop.remote.capable('changegroupsubset'):
1004 # issue1320, avoid a race if remote changed after discovery
1005 pullop.heads = pullop.rheads
1006
1007 if pullop.remote.capable('getbundle'):
1008 # TODO: get bundlecaps from remote
1009 cg = pullop.remote.getbundle('pull', common=pullop.common,
1010 heads=pullop.heads or pullop.rheads)
1011 elif pullop.heads is None:
1012 cg = pullop.remote.changegroup(pullop.fetch, 'pull')
1013 elif not pullop.remote.capable('changegroupsubset'):
1014 raise util.Abort(_("partial pull cannot be done because "
1015 "other repository doesn't support "
1016 "changegroupsubset."))
1017 else:
1018 cg = pullop.remote.changegroupsubset(pullop.fetch, pullop.heads, 'pull')
1019 pullop.cgresult = changegroup.addchangegroup(pullop.repo, cg, 'pull',
1020 pullop.remote.url())
1021
1022 def _pullphase(pullop):
1023 # Get remote phases data from remote
1024 if 'phases' in pullop.stepsdone:
1025 return
1026 remotephases = pullop.remote.listkeys('phases')
1027 _pullapplyphases(pullop, remotephases)
1028
1029 def _pullapplyphases(pullop, remotephases):
1030 """apply phase movement from observed remote state"""
1031 if 'phases' in pullop.stepsdone:
1032 return
1033 pullop.stepsdone.add('phases')
1034 publishing = bool(remotephases.get('publishing', False))
1035 if remotephases and not publishing:
1036 # remote is new and unpublishing
1037 pheads, _dr = phases.analyzeremotephases(pullop.repo,
1038 pullop.pulledsubset,
1039 remotephases)
1040 dheads = pullop.pulledsubset
1041 else:
1042 # Remote is old or publishing all common changesets
1043 # should be seen as public
1044 pheads = pullop.pulledsubset
1045 dheads = []
1046 unfi = pullop.repo.unfiltered()
1047 phase = unfi._phasecache.phase
1048 rev = unfi.changelog.nodemap.get
1049 public = phases.public
1050 draft = phases.draft
1051
1052 # exclude changesets already public locally and update the others
1053 pheads = [pn for pn in pheads if phase(unfi, rev(pn)) > public]
1054 if pheads:
1055 tr = pullop.gettransaction()
1056 phases.advanceboundary(pullop.repo, tr, public, pheads)
1057
1058 # exclude changesets already draft locally and update the others
1059 dheads = [pn for pn in dheads if phase(unfi, rev(pn)) > draft]
1060 if dheads:
1061 tr = pullop.gettransaction()
1062 phases.advanceboundary(pullop.repo, tr, draft, dheads)
1063
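To summarize the split computed above (a sketch, not Mercurial code; `analyzed_public_heads` and `pulledsubset` are illustrative names): a publishing or pre-phases remote publishes everything pulled, while a non-publishing remote only publishes the heads its phase data marks public and lets the rest advance to draft at most:

    if publishing or not remotephases:   # old or publishing server
        pheads = pulledsubset            # all pulled heads become public
        dheads = []
    else:                                # modern non-publishing server
        pheads = analyzed_public_heads   # from phases.analyzeremotephases()
        dheads = pulledsubset            # the rest advances to draft at most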
1064 def _pullbookmarks(pullop):
1065 """process the remote bookmark information to update the local one"""
1066 if 'bookmarks' in pullop.stepsdone:
1067 return
1068 pullop.stepsdone.add('bookmarks')
1069 repo = pullop.repo
1070 remotebookmarks = pullop.remotebookmarks
1071 bookmod.updatefromremote(repo.ui, repo, remotebookmarks,
1072 pullop.remote.url(),
1073 pullop.gettransaction,
1074 explicit=pullop.explicitbookmarks)
1075
1076 def _pullobsolete(pullop):
1077 """utility function to pull obsolete markers from a remote
1078
1079 The `gettransaction` function returns the pull transaction, creating
1080 one if necessary. We return the transaction to inform the calling code that
1081 a new transaction has been created (when applicable).
1082
1083 Exists mostly to allow overriding for experimentation purposes"""
1084 if 'obsmarkers' in pullop.stepsdone:
1085 return
1086 pullop.stepsdone.add('obsmarkers')
1087 tr = None
1088 if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
1089 pullop.repo.ui.debug('fetching remote obsolete markers\n')
1090 remoteobs = pullop.remote.listkeys('obsolete')
1091 if 'dump0' in remoteobs:
1092 tr = pullop.gettransaction()
1093 for key in sorted(remoteobs, reverse=True):
1094 if key.startswith('dump'):
1095 data = base85.b85decode(remoteobs[key])
1096 pullop.repo.obsstore.mergemarkers(tr, data)
1097 pullop.repo.invalidatevolatilesets()
1098 return tr
1099
1100 def caps20to10(repo):
1101 """return a set with appropriate options to use bundle20 during getbundle"""
-1102 caps = set(['HG2X'])
+1102 caps = set(['HG2Y'])
1103 capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo))
1104 caps.add('bundle2=' + urllib.quote(capsblob))
1105 return caps
1106
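For illustration, the set this returns looks roughly like the following (the encoded blob varies with the repo's advertised bundle2 capabilities):

    caps = caps20to10(repo)
    # e.g. set(['HG2Y', 'bundle2=HG2Y%0Ab2x%253Alistkeys%0A...'])
    assert 'HG2Y' in caps
    assert any(c.startswith('bundle2=') for c in caps)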
1107 # List of names of steps to perform for a bundle2 for getbundle, order matters.
1108 getbundle2partsorder = []
1109
1110 # Mapping between step name and function
1111 #
1112 # This exists to help extensions wrap steps if necessary
1113 getbundle2partsmapping = {}
1114
1115 def getbundle2partsgenerator(stepname):
1116 """decorator for functions generating bundle2 parts for getbundle
1117
1118 The function is added to the step -> function mapping and appended to the
1119 list of steps. Beware that decorated functions will be added in order
1120 (this may matter).
1121
1122 You can only use this decorator for new steps; if you want to wrap a step
1123 from an extension, change the getbundle2partsmapping dictionary directly."""
1124 def dec(func):
1125 assert stepname not in getbundle2partsmapping
1126 getbundle2partsmapping[stepname] = func
1127 getbundle2partsorder.append(stepname)
1128 return func
1129 return dec
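A usage sketch (the step name, part name, and capability are hypothetical): an extension adding its own part to every getbundle reply would register a generator like this, guarding on the client's advertised bundle2 capabilities:

    @getbundle2partsgenerator('myextra')
    def _getbundlemyextrapart(bundler, repo, source, bundlecaps=None,
                              b2caps=None, **kwargs):
        if b2caps and 'b2x:myextra' in b2caps:   # hypothetical capability
            bundler.newpart('b2x:myextra', data='payload')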
1130
1131 def getbundle(repo, source, heads=None, common=None, bundlecaps=None,
1132 **kwargs):
1133 """return a full bundle (with potentially multiple kinds of parts)
1134
-1135 Could be a bundle HG10 or a bundle HG2X depending on bundlecaps
+1135 Could be a bundle HG10 or a bundle HG2Y depending on bundlecaps
1136 passed. For now, the bundle can contain only changegroups, but this will
1137 change when more part types become available for bundle2.
1138
1139 This is different from changegroup.getchangegroup, which only returns an HG10
1140 changegroup bundle. They may eventually get reunited in the future when we
1141 have a clearer idea of the API we want to use to query different data.
1142
1143 The implementation is at a very early stage and will get a massive rework
1144 when the API of bundle is refined.
1145 """
1146 # bundle10 case
-1147 if bundlecaps is None or 'HG2X' not in bundlecaps:
+1147 if bundlecaps is None or 'HG2Y' not in bundlecaps:
1148 if bundlecaps and not kwargs.get('cg', True):
1149 raise ValueError(_('request for bundle10 must include changegroup'))
1150
1151 if kwargs:
1152 raise ValueError(_('unsupported getbundle arguments: %s')
1153 % ', '.join(sorted(kwargs.keys())))
1154 return changegroup.getchangegroup(repo, source, heads=heads,
1155 common=common, bundlecaps=bundlecaps)
1156
1157 # bundle20 case
1158 b2caps = {}
1159 for bcaps in bundlecaps:
1160 if bcaps.startswith('bundle2='):
1161 blob = urllib.unquote(bcaps[len('bundle2='):])
1162 b2caps.update(bundle2.decodecaps(blob))
1163 bundler = bundle2.bundle20(repo.ui, b2caps)
1164
1165 for name in getbundle2partsorder:
1166 func = getbundle2partsmapping[name]
1167 kwargs['heads'] = heads
1168 kwargs['common'] = common
1169 func(bundler, repo, source, bundlecaps=bundlecaps, b2caps=b2caps,
1170 **kwargs)
1171
1172 return util.chunkbuffer(bundler.getchunks())
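The two call shapes, side by side (a sketch; whether `bundlecaps` contains 'HG2Y' selects the branch taken above):

    # bundle10: plain HG10 changegroup, no extra arguments allowed
    cg = getbundle(repo, 'pull', heads=heads, common=common)

    # bundle20: multi-part bundle; extra kwargs reach the part generators
    stream = getbundle(repo, 'pull', heads=heads, common=common,
                       bundlecaps=caps20to10(repo),   # contains 'HG2Y'
                       listkeys=['phase', 'bookmarks'])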
1173
1174 @getbundle2partsgenerator('changegroup')
1175 def _getbundlechangegrouppart(bundler, repo, source, bundlecaps=None,
1176 b2caps=None, heads=None, common=None, **kwargs):
1177 """add a changegroup part to the requested bundle"""
1178 cg = None
1179 if kwargs.get('cg', True):
1180 # build changegroup bundle here.
1181 cg = changegroup.getchangegroup(repo, source, heads=heads,
1182 common=common, bundlecaps=bundlecaps)
1183
1184 if cg:
1185 bundler.newpart('b2x:changegroup', data=cg.getchunks())
1186
1187 @getbundle2partsgenerator('listkeys')
1188 def _getbundlelistkeysparts(bundler, repo, source, bundlecaps=None,
1189 b2caps=None, **kwargs):
1190 """add parts containing listkeys namespaces to the requested bundle"""
1191 listkeys = kwargs.get('listkeys', ())
1192 for namespace in listkeys:
1193 part = bundler.newpart('b2x:listkeys')
1194 part.addparam('namespace', namespace)
1195 keys = repo.listkeys(namespace).items()
1196 part.data = pushkey.encodekeys(keys)
1197
1198 @getbundle2partsgenerator('obsmarkers')
1199 def _getbundleobsmarkerpart(bundler, repo, source, bundlecaps=None,
1200 b2caps=None, heads=None, **kwargs):
1201 """add an obsolescence markers part to the requested bundle"""
1202 if kwargs.get('obsmarkers', False):
1203 if heads is None:
1204 heads = repo.heads()
1205 subset = [c.node() for c in repo.set('::%ln', heads)]
1206 markers = repo.obsstore.relevantmarkers(subset)
1207 buildobsmarkerspart(bundler, markers)
1208
1209 @getbundle2partsgenerator('extra')
1210 def _getbundleextrapart(bundler, repo, source, bundlecaps=None,
1211 b2caps=None, **kwargs):
1212 """hook function to let extensions add parts to the requested bundle"""
1213 pass
1214
1215 def check_heads(repo, their_heads, context):
1216 """check if the heads of a repo have been modified
1217
1218 Used by peer for unbundling.
1219 """
1220 heads = repo.heads()
1221 heads_hash = util.sha1(''.join(sorted(heads))).digest()
1222 if not (their_heads == ['force'] or their_heads == heads or
1223 their_heads == ['hashed', heads_hash]):
1224 # someone else committed/pushed/unbundled while we
1225 # were transferring data
1226 raise error.PushRaced('repository changed while %s - '
1227 'please try again' % context)
1228
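The race check accepts either the literal head list or a hash of it; a client computes the hashed form exactly as the server does above (sketch):

    heads_hash = util.sha1(''.join(sorted(repo.heads()))).digest()
    their_heads = ['hashed', heads_hash]   # what a hash-based client sends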
1229 def unbundle(repo, cg, heads, source, url):
1230 """Apply a bundle to a repo.
1231
1232 this function makes sure the repo is locked during the application and has
1233 a mechanism to check that no push race occurred between the creation of the
1234 bundle and its application.
1235
1236 If the push was raced, a PushRaced exception is raised."""
1237 r = 0
1238 # need a transaction when processing a bundle2 stream
1239 tr = None
1240 lock = repo.lock()
1241 try:
1242 check_heads(repo, heads, 'uploading changes')
1243 # push can proceed
1244 if util.safehasattr(cg, 'params'):
1245 try:
1246 tr = repo.transaction('unbundle')
1247 tr.hookargs['source'] = source
1248 tr.hookargs['url'] = url
1249 tr.hookargs['bundle2-exp'] = '1'
1250 r = bundle2.processbundle(repo, cg, lambda: tr).reply
1251 cl = repo.unfiltered().changelog
1252 p = cl.writepending() and repo.root or ""
1253 repo.hook('b2x-pretransactionclose', throw=True, pending=p,
1254 **tr.hookargs)
1255 tr.close()
1256 repo.hook('b2x-transactionclose', **tr.hookargs)
1257 except Exception, exc:
1258 exc.duringunbundle2 = True
1259 raise
1260 else:
1261 r = changegroup.addchangegroup(repo, cg, source, url)
1262 finally:
1263 if tr is not None:
1264 tr.release()
1265 lock.release()
1266 return r
@@ -1,1792 +1,1792 b''
1 # localrepo.py - read/write repository class for mercurial
2 #
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 #
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
7 from node import hex, nullid, short
8 from i18n import _
9 import urllib
10 import peer, changegroup, subrepo, pushkey, obsolete, repoview
11 import changelog, dirstate, filelog, manifest, context, bookmarks, phases
12 import lock as lockmod
13 import transaction, store, encoding, exchange, bundle2
14 import scmutil, util, extensions, hook, error, revset
15 import match as matchmod
16 import merge as mergemod
17 import tags as tagsmod
18 from lock import release
19 import weakref, errno, os, time, inspect
20 import branchmap, pathutil
21 propertycache = util.propertycache
22 filecache = scmutil.filecache
23
24 class repofilecache(filecache):
25 """All filecache usage on a repo is done for logic that should be unfiltered
26 """
27
28 def __get__(self, repo, type=None):
29 return super(repofilecache, self).__get__(repo.unfiltered(), type)
30 def __set__(self, repo, value):
31 return super(repofilecache, self).__set__(repo.unfiltered(), value)
32 def __delete__(self, repo):
33 return super(repofilecache, self).__delete__(repo.unfiltered())
34
35 class storecache(repofilecache):
36 """filecache for files in the store"""
37 def join(self, obj, fname):
38 return obj.sjoin(fname)
39
40 class unfilteredpropertycache(propertycache):
41 """propertycache that applies to the unfiltered repo only"""
42
43 def __get__(self, repo, type=None):
44 unfi = repo.unfiltered()
45 if unfi is repo:
46 return super(unfilteredpropertycache, self).__get__(unfi)
47 return getattr(unfi, self.name)
48
49 class filteredpropertycache(propertycache):
50 """propertycache that must take filtering into account"""
51
52 def cachevalue(self, obj, value):
53 object.__setattr__(obj, self.name, value)
54
55
56 def hasunfilteredcache(repo, name):
57 """check if a repo has an unfilteredpropertycache value for <name>"""
58 return name in vars(repo.unfiltered())
59
60 def unfilteredmethod(orig):
61 """decorate a method that always needs to be run on the unfiltered version"""
62 def wrapper(repo, *args, **kwargs):
63 return orig(repo.unfiltered(), *args, **kwargs)
64 return wrapper
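A usage sketch (the subclass and method are hypothetical): wrapping a method this way guarantees it always sees the unfiltered repository, hidden revisions included:

    class examplerepo(localrepository):
        @unfilteredmethod
        def countallrevs(self):
            # 'self' is always the unfiltered repository here
            return len(self.changelog)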
65
66 moderncaps = set(('lookup', 'branchmap', 'pushkey', 'known', 'getbundle',
67 'unbundle'))
68 legacycaps = moderncaps.union(set(['changegroupsubset']))
69
70 class localpeer(peer.peerrepository):
71 '''peer for a local repo; reflects only the most recent API'''
72
73 def __init__(self, repo, caps=moderncaps):
74 peer.peerrepository.__init__(self)
75 self._repo = repo.filtered('served')
76 self.ui = repo.ui
77 self._caps = repo._restrictcapabilities(caps)
78 self.requirements = repo.requirements
79 self.supportedformats = repo.supportedformats
80
81 def close(self):
82 self._repo.close()
83
84 def _capabilities(self):
85 return self._caps
86
87 def local(self):
88 return self._repo
89
90 def canpush(self):
91 return True
92
93 def url(self):
94 return self._repo.url()
95
96 def lookup(self, key):
97 return self._repo.lookup(key)
98
99 def branchmap(self):
100 return self._repo.branchmap()
101
102 def heads(self):
103 return self._repo.heads()
104
105 def known(self, nodes):
106 return self._repo.known(nodes)
107
108 def getbundle(self, source, heads=None, common=None, bundlecaps=None,
109 format='HG10', **kwargs):
110 cg = exchange.getbundle(self._repo, source, heads=heads,
111 common=common, bundlecaps=bundlecaps, **kwargs)
-112 if bundlecaps is not None and 'HG2X' in bundlecaps:
+112 if bundlecaps is not None and 'HG2Y' in bundlecaps:
113 # When requesting a bundle2, getbundle returns a stream to make the
114 # wire level function happier. We need to build a proper object
115 # from it in local peer.
116 cg = bundle2.unbundle20(self.ui, cg)
117 return cg
118
119 # TODO We might want to move the next two calls into legacypeer and add
120 # unbundle instead.
121
122 def unbundle(self, cg, heads, url):
123 """apply a bundle on a repo
124
125 This function handles the repo locking itself."""
126 try:
127 cg = exchange.readbundle(self.ui, cg, None)
128 ret = exchange.unbundle(self._repo, cg, heads, 'push', url)
129 if util.safehasattr(ret, 'getchunks'):
130 # This is a bundle20 object, turn it into an unbundler.
131 # This little dance should be dropped eventually when the API
132 # is finally improved.
133 stream = util.chunkbuffer(ret.getchunks())
134 ret = bundle2.unbundle20(self.ui, stream)
135 return ret
136 except error.PushRaced, exc:
137 raise error.ResponseError(_('push failed:'), str(exc))
138
139 def lock(self):
140 return self._repo.lock()
141
142 def addchangegroup(self, cg, source, url):
143 return changegroup.addchangegroup(self._repo, cg, source, url)
144
145 def pushkey(self, namespace, key, old, new):
146 return self._repo.pushkey(namespace, key, old, new)
147
148 def listkeys(self, namespace):
149 return self._repo.listkeys(namespace)
150
151 def debugwireargs(self, one, two, three=None, four=None, five=None):
152 '''used to test argument passing over the wire'''
153 return "%s %s %s %s %s" % (one, two, three, four, five)
154
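A sketch of what the HG2Y switch in getbundle() above means for callers (assumes exchange.caps20to10 as defined earlier in this diff):

    peer = localpeer(repo)
    caps = exchange.caps20to10(repo)       # advertises 'HG2Y'
    ret = peer.getbundle('pull', bundlecaps=caps)
    # with 'HG2Y' in bundlecaps: ret is a bundle2.unbundle20 instance
    # without it:                ret is a raw HG10 changegroup stream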
155 class locallegacypeer(localpeer):
156 '''peer extension which implements legacy methods too; used for tests with
157 restricted capabilities'''
158
159 def __init__(self, repo):
160 localpeer.__init__(self, repo, caps=legacycaps)
161
162 def branches(self, nodes):
163 return self._repo.branches(nodes)
164
165 def between(self, pairs):
166 return self._repo.between(pairs)
167
168 def changegroup(self, basenodes, source):
169 return changegroup.changegroup(self._repo, basenodes, source)
170
171 def changegroupsubset(self, bases, heads, source):
172 return changegroup.changegroupsubset(self._repo, bases, heads, source)
173
174 class localrepository(object):
175
176 supportedformats = set(('revlogv1', 'generaldelta'))
177 _basesupported = supportedformats | set(('store', 'fncache', 'shared',
178 'dotencode'))
179 openerreqs = set(('revlogv1', 'generaldelta'))
180 requirements = ['revlogv1']
181 filtername = None
182
183 # a list of (ui, featureset) functions.
184 # only functions defined in module of enabled extensions are invoked
185 featuresetupfuncs = set()
186
187 def _baserequirements(self, create):
188 return self.requirements[:]
189
190 def __init__(self, baseui, path=None, create=False):
191 self.wvfs = scmutil.vfs(path, expandpath=True, realpath=True)
192 self.wopener = self.wvfs
193 self.root = self.wvfs.base
194 self.path = self.wvfs.join(".hg")
195 self.origroot = path
196 self.auditor = pathutil.pathauditor(self.root, self._checknested)
197 self.vfs = scmutil.vfs(self.path)
198 self.opener = self.vfs
199 self.baseui = baseui
200 self.ui = baseui.copy()
201 self.ui.copy = baseui.copy # prevent copying repo configuration
202 # A list of callbacks to shape the phase if no data were found.
203 # Callbacks are in the form: func(repo, roots) --> processed root.
204 # This list is to be filled by extensions during repo setup
205 self._phasedefaults = []
206 try:
207 self.ui.readconfig(self.join("hgrc"), self.root)
208 extensions.loadall(self.ui)
209 except IOError:
210 pass
211
212 if self.featuresetupfuncs:
213 self.supported = set(self._basesupported) # use private copy
214 extmods = set(m.__name__ for n, m
215 in extensions.extensions(self.ui))
216 for setupfunc in self.featuresetupfuncs:
217 if setupfunc.__module__ in extmods:
218 setupfunc(self.ui, self.supported)
219 else:
220 self.supported = self._basesupported
221
222 if not self.vfs.isdir():
223 if create:
224 if not self.wvfs.exists():
225 self.wvfs.makedirs()
226 self.vfs.makedir(notindexed=True)
227 requirements = self._baserequirements(create)
228 if self.ui.configbool('format', 'usestore', True):
229 self.vfs.mkdir("store")
230 requirements.append("store")
231 if self.ui.configbool('format', 'usefncache', True):
232 requirements.append("fncache")
233 if self.ui.configbool('format', 'dotencode', True):
234 requirements.append('dotencode')
235 # create an invalid changelog
236 self.vfs.append(
237 "00changelog.i",
238 '\0\0\0\2' # represents revlogv2
239 ' dummy changelog to prevent using the old repo layout'
240 )
241 if self.ui.configbool('format', 'generaldelta', False):
242 requirements.append("generaldelta")
243 requirements = set(requirements)
244 else:
245 raise error.RepoError(_("repository %s not found") % path)
246 elif create:
247 raise error.RepoError(_("repository %s already exists") % path)
248 else:
249 try:
250 requirements = scmutil.readrequires(self.vfs, self.supported)
251 except IOError, inst:
252 if inst.errno != errno.ENOENT:
253 raise
254 requirements = set()
255
256 self.sharedpath = self.path
257 try:
258 vfs = scmutil.vfs(self.vfs.read("sharedpath").rstrip('\n'),
259 realpath=True)
260 s = vfs.base
261 if not vfs.exists():
262 raise error.RepoError(
263 _('.hg/sharedpath points to nonexistent directory %s') % s)
264 self.sharedpath = s
265 except IOError, inst:
266 if inst.errno != errno.ENOENT:
267 raise
268
269 self.store = store.store(requirements, self.sharedpath, scmutil.vfs)
270 self.spath = self.store.path
271 self.svfs = self.store.vfs
272 self.sopener = self.svfs
273 self.sjoin = self.store.join
274 self.vfs.createmode = self.store.createmode
275 self._applyrequirements(requirements)
276 if create:
277 self._writerequirements()
278
279
280 self._branchcaches = {}
281 self.filterpats = {}
282 self._datafilters = {}
283 self._transref = self._lockref = self._wlockref = None
284
285 # A cache for various files under .hg/ that tracks file changes,
286 # (used by the filecache decorator)
287 #
288 # Maps a property name to its util.filecacheentry
289 self._filecache = {}
290
291 # hold sets of revision to be filtered
292 # should be cleared when something might have changed the filter value:
293 # - new changesets,
294 # - phase change,
295 # - new obsolescence marker,
296 # - working directory parent change,
297 # - bookmark changes
298 self.filteredrevcache = {}
299
300 def close(self):
301 pass
302
303 def _restrictcapabilities(self, caps):
304 # bundle2 is not ready for prime time, drop it unless explicitly
305 # required by the tests (or some brave tester)
305 # required by the tests (or some brave tester)
306 if self.ui.configbool('experimental', 'bundle2-exp', False):
306 if self.ui.configbool('experimental', 'bundle2-exp', False):
307 caps = set(caps)
307 caps = set(caps)
308 capsblob = bundle2.encodecaps(bundle2.getrepocaps(self))
308 capsblob = bundle2.encodecaps(bundle2.getrepocaps(self))
309 caps.add('bundle2-exp=' + urllib.quote(capsblob))
309 caps.add('bundle2-exp=' + urllib.quote(capsblob))
310 return caps
310 return caps
311
311
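    def _examplebundle2caps(self, caps):
        # Illustrative sketch only (not part of upstream localrepo.py): how
        # a peer might recover the bundle2 capabilities advertised above.
        # The blob is urlquoted before being appended to the capability
        # string, so the client reverses both steps; 'caps' is assumed to
        # be the set of capability strings received from the server.
        for cap in caps:
            if cap.startswith('bundle2-exp='):
                blob = urllib.unquote(cap[len('bundle2-exp='):])
                return bundle2.decodecaps(blob)
        return {}
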
    def _applyrequirements(self, requirements):
        self.requirements = requirements
        self.sopener.options = dict((r, 1) for r in requirements
                                    if r in self.openerreqs)
        chunkcachesize = self.ui.configint('format', 'chunkcachesize')
        if chunkcachesize is not None:
            self.sopener.options['chunkcachesize'] = chunkcachesize

    def _writerequirements(self):
        reqfile = self.opener("requires", "w")
        for r in sorted(self.requirements):
            reqfile.write("%s\n" % r)
        reqfile.close()

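    def _examplereadrequires(self):
        # Illustrative sketch only (not part of upstream localrepo.py): the
        # 'requires' file written above holds one requirement token per
        # line. scmutil.readrequires() additionally aborts on tokens
        # missing from self.supported, but the raw read side amounts to:
        requirements = set()
        for line in self.vfs.read("requires").splitlines():
            if line:
                requirements.add(line)
        return requirements
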
    def _checknested(self, path):
        """Determine if path is a legal nested repository."""
        if not path.startswith(self.root):
            return False
        subpath = path[len(self.root) + 1:]
        normsubpath = util.pconvert(subpath)

        # XXX: Checking against the current working copy is wrong in
        # the sense that it can reject things like
        #
        #   $ hg cat -r 10 sub/x.txt
        #
        # if sub/ is no longer a subrepository in the working copy
        # parent revision.
        #
        # However, it can of course also allow things that would have
        # been rejected before, such as the above cat command if sub/
        # is a subrepository now, but was a normal directory before.
        # The old path auditor would have rejected by mistake since it
        # panics when it sees sub/.hg/.
        #
        # All in all, checking against the working copy seems sensible
        # since we want to prevent access to nested repositories on
        # the filesystem *now*.
        ctx = self[None]
        parts = util.splitpath(subpath)
        while parts:
            prefix = '/'.join(parts)
            if prefix in ctx.substate:
                if prefix == normsubpath:
                    return True
                else:
                    sub = ctx.sub(prefix)
                    return sub.checknested(subpath[len(prefix) + 1:])
            else:
                parts.pop()
        return False

    def peer(self):
        return localpeer(self) # not cached to avoid reference cycle

    def unfiltered(self):
        """Return unfiltered version of the repository

        Intended to be overwritten by filtered repo."""
        return self

    def filtered(self, name):
        """Return a filtered version of a repository"""
        # build a new class with the mixin and the current class
        # (possibly subclass of the repo)
        class proxycls(repoview.repoview, self.unfiltered().__class__):
            pass
        return proxycls(self, name)

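    def _examplefilteredviews(self):
        # Illustrative sketch only (not part of upstream localrepo.py): the
        # proxy class built above yields views that share this repo's
        # storage while hiding some revisions; repoview defines the
        # standard filter names, e.g. 'visible' (hides hidden changesets)
        # and 'served' (also hides secret ones).
        visible = self.filtered('visible')
        served = self.filtered('served')
        assert visible.unfiltered() is self.unfiltered()
        return visible, served
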
    @repofilecache('bookmarks')
    def _bookmarks(self):
        return bookmarks.bmstore(self)

    @repofilecache('bookmarks.current')
    def _bookmarkcurrent(self):
        return bookmarks.readcurrent(self)

    def bookmarkheads(self, bookmark):
        name = bookmark.split('@', 1)[0]
        heads = []
        for mark, n in self._bookmarks.iteritems():
            if mark.split('@', 1)[0] == name:
                heads.append(n)
        return heads

    @storecache('phaseroots')
    def _phasecache(self):
        return phases.phasecache(self, self._phasedefaults)

    @storecache('obsstore')
    def obsstore(self):
        # read default format for new obsstore.
        defaultformat = self.ui.configint('format', 'obsstore-version', None)
        # rely on obsstore class default when possible.
        kwargs = {}
        if defaultformat is not None:
            kwargs['defaultformat'] = defaultformat
        readonly = not obsolete.isenabled(self, obsolete.createmarkersopt)
        store = obsolete.obsstore(self.sopener, readonly=readonly,
                                  **kwargs)
        if store and readonly:
            # message is rare enough to not be translated
            msg = 'obsolete feature not enabled but %i markers found!\n'
            self.ui.warn(msg % len(list(store)))
        return store

    @storecache('00changelog.i')
    def changelog(self):
        c = changelog.changelog(self.sopener)
        if 'HG_PENDING' in os.environ:
            p = os.environ['HG_PENDING']
            if p.startswith(self.root):
                c.readpending('00changelog.i.a')
        return c

    @storecache('00manifest.i')
    def manifest(self):
        return manifest.manifest(self.sopener)

    @repofilecache('dirstate')
    def dirstate(self):
        warned = [0]
        def validate(node):
            try:
                self.changelog.rev(node)
                return node
            except error.LookupError:
                if not warned[0]:
                    warned[0] = True
                    self.ui.warn(_("warning: ignoring unknown"
                                   " working parent %s!\n") % short(node))
                return nullid

        return dirstate.dirstate(self.opener, self.ui, self.root, validate)

    def __getitem__(self, changeid):
        if changeid is None:
            return context.workingctx(self)
        return context.changectx(self, changeid)

    def __contains__(self, changeid):
        try:
            return bool(self.lookup(changeid))
        except error.RepoLookupError:
            return False

    def __nonzero__(self):
        return True

    def __len__(self):
        return len(self.changelog)

    def __iter__(self):
        return iter(self.changelog)

    def revs(self, expr, *args):
        '''Return a list of revisions matching the given revset'''
        expr = revset.formatspec(expr, *args)
        m = revset.match(None, expr)
        return m(self, revset.spanset(self))

    def set(self, expr, *args):
        '''
        Yield a context for each matching revision, after doing arg
        replacement via revset.formatspec
        '''
        for r in self.revs(expr, *args):
            yield self[r]

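    def _examplerevsets(self, node):
        # Illustrative sketch only (not part of upstream localrepo.py):
        # revset.formatspec quotes the extra arguments, e.g. %s for a
        # string, %d for an int and %n for a binary node, so callers never
        # have to build revset strings by hand. The user name below is
        # made up for the example.
        for rev in self.revs('ancestors(%n) and user(%s)', node, 'alice'):
            pass # integer revisions
        for ctx in self.set('heads(branch(%s))', 'default'):
            pass # changectx objects
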
    def url(self):
        return 'file:' + self.root

    def hook(self, name, throw=False, **args):
        """Call a hook, passing this repo instance.

        This is a convenience method to aid invoking hooks. Extensions likely
        won't call this unless they have registered a custom hook or are
        replacing code that is expected to call a hook.
        """
        return hook.hook(self.ui, self, name, throw, **args)

    @unfilteredmethod
    def _tag(self, names, node, message, local, user, date, extra={},
             editor=False):
        if isinstance(names, str):
            names = (names,)

        branches = self.branchmap()
        for name in names:
            self.hook('pretag', throw=True, node=hex(node), tag=name,
                      local=local)
            if name in branches:
                self.ui.warn(_("warning: tag %s conflicts with existing"
                               " branch name\n") % name)

        def writetags(fp, names, munge, prevtags):
            fp.seek(0, 2)
            if prevtags and prevtags[-1] != '\n':
                fp.write('\n')
            for name in names:
                m = munge and munge(name) or name
                if (self._tagscache.tagtypes and
                    name in self._tagscache.tagtypes):
                    old = self.tags().get(name, nullid)
                    fp.write('%s %s\n' % (hex(old), m))
                fp.write('%s %s\n' % (hex(node), m))
            fp.close()

        prevtags = ''
        if local:
            try:
                fp = self.opener('localtags', 'r+')
            except IOError:
                fp = self.opener('localtags', 'a')
            else:
                prevtags = fp.read()

            # local tags are stored in the current charset
            writetags(fp, names, None, prevtags)
            for name in names:
                self.hook('tag', node=hex(node), tag=name, local=local)
            return

        try:
            fp = self.wfile('.hgtags', 'rb+')
        except IOError, e:
            if e.errno != errno.ENOENT:
                raise
            fp = self.wfile('.hgtags', 'ab')
        else:
            prevtags = fp.read()

        # committed tags are stored in UTF-8
        writetags(fp, names, encoding.fromlocal, prevtags)

        fp.close()

        self.invalidatecaches()

        if '.hgtags' not in self.dirstate:
            self[None].add(['.hgtags'])

        m = matchmod.exact(self.root, '', ['.hgtags'])
        tagnode = self.commit(message, user, date, extra=extra, match=m,
                              editor=editor)

        for name in names:
            self.hook('tag', node=hex(node), tag=name, local=local)

        return tagnode

    def tag(self, names, node, message, local, user, date, editor=False):
        '''tag a revision with one or more symbolic names.

        names is a list of strings or, when adding a single tag, names may be a
        string.

        if local is True, the tags are stored in a per-repository file.
        otherwise, they are stored in the .hgtags file, and a new
        changeset is committed with the change.

        keyword arguments:

        local: whether to store tags in non-version-controlled file
        (default False)

        message: commit message to use if committing

        user: name of user to use if committing

        date: date tuple to use if committing'''

        if not local:
            m = matchmod.exact(self.root, '', ['.hgtags'])
            if util.any(self.status(match=m, unknown=True, ignored=True)):
                raise util.Abort(_('working copy of .hgtags is changed'),
                                 hint=_('please commit .hgtags manually'))

        self.tags() # instantiate the cache
        self._tag(names, node, message, local, user, date, editor=editor)

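    def _exampletagging(self, node):
        # Illustrative sketch only (not part of upstream localrepo.py):
        # with local=False this edits .hgtags and commits a new changeset,
        # with local=True it only appends to .hg/localtags; the tag names,
        # user and message below are made up for the example.
        self.tag('v1.0', node, 'Added tag v1.0', False,
                 'Alice <alice@example.com>', None)
        self.tag(['tested', 'stable'], node, '', True, None, None)
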
    @filteredpropertycache
    def _tagscache(self):
        '''Returns a tagscache object that contains various tags related
        caches.'''

        # This simplifies its cache management by having one decorated
        # function (this one) and the rest simply fetch things from it.
        class tagscache(object):
            def __init__(self):
                # These two define the set of tags for this repository. tags
                # maps tag name to node; tagtypes maps tag name to 'global' or
                # 'local'. (Global tags are defined by .hgtags across all
                # heads, and local tags are defined in .hg/localtags.)
                # They constitute the in-memory cache of tags.
                self.tags = self.tagtypes = None

                self.nodetagscache = self.tagslist = None

        cache = tagscache()
        cache.tags, cache.tagtypes = self._findtags()

        return cache

    def tags(self):
        '''return a mapping of tag to node'''
        t = {}
        if self.changelog.filteredrevs:
            tags, tt = self._findtags()
        else:
            tags = self._tagscache.tags
        for k, v in tags.iteritems():
            try:
                # ignore tags to unknown nodes
                self.changelog.rev(v)
                t[k] = v
            except (error.LookupError, ValueError):
                pass
        return t

    def _findtags(self):
        '''Do the hard work of finding tags. Return a pair of dicts
        (tags, tagtypes) where tags maps tag name to node, and tagtypes
        maps tag name to a string like \'global\' or \'local\'.
        Subclasses or extensions are free to add their own tags, but
        should be aware that the returned dicts will be retained for the
        duration of the localrepo object.'''

        # XXX what tagtype should subclasses/extensions use? Currently
        # mq and bookmarks add tags, but do not set the tagtype at all.
        # Should each extension invent its own tag type? Should there
        # be one tagtype for all such "virtual" tags? Or is the status
        # quo fine?

        alltags = {} # map tag name to (node, hist)
        tagtypes = {}

        tagsmod.findglobaltags(self.ui, self, alltags, tagtypes)
        tagsmod.readlocaltags(self.ui, self, alltags, tagtypes)

        # Build the return dicts. Have to re-encode tag names because
        # the tags module always uses UTF-8 (in order not to lose info
        # writing to the cache), but the rest of Mercurial wants them in
        # local encoding.
        tags = {}
        for (name, (node, hist)) in alltags.iteritems():
            if node != nullid:
                tags[encoding.tolocal(name)] = node
        tags['tip'] = self.changelog.tip()
        tagtypes = dict([(encoding.tolocal(name), value)
                         for (name, value) in tagtypes.iteritems()])
        return (tags, tagtypes)

    def tagtype(self, tagname):
        '''
        return the type of the given tag. result can be:

        'local'  : a local tag
        'global' : a global tag
        None     : tag does not exist
        '''

        return self._tagscache.tagtypes.get(tagname)

    def tagslist(self):
        '''return a list of tags ordered by revision'''
        if not self._tagscache.tagslist:
            l = []
            for t, n in self.tags().iteritems():
                l.append((self.changelog.rev(n), t, n))
            self._tagscache.tagslist = [(t, n) for r, t, n in sorted(l)]

        return self._tagscache.tagslist

    def nodetags(self, node):
        '''return the tags associated with a node'''
        if not self._tagscache.nodetagscache:
            nodetagscache = {}
            for t, n in self._tagscache.tags.iteritems():
                nodetagscache.setdefault(n, []).append(t)
            for tags in nodetagscache.itervalues():
                tags.sort()
            self._tagscache.nodetagscache = nodetagscache
        return self._tagscache.nodetagscache.get(node, [])

    def nodebookmarks(self, node):
        marks = []
        for bookmark, n in self._bookmarks.iteritems():
            if n == node:
                marks.append(bookmark)
        return sorted(marks)

    def branchmap(self):
        '''returns a dictionary {branch: [branchheads]} with branchheads
        ordered by increasing revision number'''
        branchmap.updatecache(self)
        return self._branchcaches[self.filtername]

    def branchtip(self, branch):
        '''return the tip node for a given branch'''
        try:
            return self.branchmap().branchtip(branch)
        except KeyError:
            raise error.RepoLookupError(_("unknown branch '%s'") % branch)

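    def _examplebranchheads(self):
        # Illustrative sketch only (not part of upstream localrepo.py):
        # each heads list is ordered by increasing revision number, so the
        # last element is the tipmost head of that branch; branchtip()
        # answers the same question for a single branch and raises
        # RepoLookupError for unknown names.
        tips = {}
        for branch, heads in self.branchmap().iteritems():
            tips[branch] = heads[-1]
        return tips
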
    def lookup(self, key):
        return self[key].node()

    def lookupbranch(self, key, remote=None):
        repo = remote or self
        if key in repo.branchmap():
            return key

        repo = (remote and remote.local()) and remote or self
        return repo[key].branch()

    def known(self, nodes):
        nm = self.changelog.nodemap
        pc = self._phasecache
        result = []
        for n in nodes:
            r = nm.get(n)
            resp = not (r is None or pc.phase(self, r) >= phases.secret)
            result.append(resp)
        return result

    def local(self):
        return self

    def cancopy(self):
        # so statichttprepo's override of local() works
        if not self.local():
            return False
        if not self.ui.configbool('phases', 'publish', True):
            return True
        # if publishing we can't copy if there is filtered content
        return not self.filtered('visible').changelog.filteredrevs

    def join(self, f, *insidef):
        return os.path.join(self.path, f, *insidef)

    def wjoin(self, f, *insidef):
        return os.path.join(self.root, f, *insidef)

    def file(self, f):
        if f[0] == '/':
            f = f[1:]
        return filelog.filelog(self.sopener, f)

    def changectx(self, changeid):
        return self[changeid]

    def parents(self, changeid=None):
        '''get list of changectxs for parents of changeid'''
        return self[changeid].parents()

    def setparents(self, p1, p2=nullid):
        self.dirstate.beginparentchange()
        copies = self.dirstate.setparents(p1, p2)
        pctx = self[p1]
        if copies:
            # Adjust copy records: the dirstate cannot do it, as it
            # requires access to the parents' manifests. Preserve them
            # only for entries added to the first parent.
            for f in copies:
                if f not in pctx and copies[f] in pctx:
                    self.dirstate.copy(copies[f], f)
        if p2 == nullid:
            for f, s in sorted(self.dirstate.copies().items()):
                if f not in pctx and s not in pctx:
                    self.dirstate.copy(None, f)
        self.dirstate.endparentchange()

    def filectx(self, path, changeid=None, fileid=None):
        """changeid can be a changeset revision, node, or tag.
        fileid can be a file revision or node."""
        return context.filectx(self, path, changeid, fileid)

    def getcwd(self):
        return self.dirstate.getcwd()

    def pathto(self, f, cwd=None):
        return self.dirstate.pathto(f, cwd)

    def wfile(self, f, mode='r'):
        return self.wopener(f, mode)

    def _link(self, f):
        return self.wvfs.islink(f)

    def _loadfilter(self, filter):
        if filter not in self.filterpats:
            l = []
            for pat, cmd in self.ui.configitems(filter):
                if cmd == '!':
                    continue
                mf = matchmod.match(self.root, '', [pat])
                fn = None
                params = cmd
                for name, filterfn in self._datafilters.iteritems():
                    if cmd.startswith(name):
                        fn = filterfn
                        params = cmd[len(name):].lstrip()
                        break
                if not fn:
                    fn = lambda s, c, **kwargs: util.filter(s, c)
                # Wrap old filters not supporting keyword arguments
                if not inspect.getargspec(fn)[2]:
                    oldfn = fn
                    fn = lambda s, c, **kwargs: oldfn(s, c)
                l.append((mf, fn, params))
            self.filterpats[filter] = l
        return self.filterpats[filter]

    def _filter(self, filterpats, filename, data):
        for mf, fn, cmd in filterpats:
            if mf(filename):
                self.ui.debug("filtering %s through %s\n" % (filename, cmd))
                data = fn(data, cmd, ui=self.ui, repo=self, filename=filename)
                break

        return data

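    def _exampledatafilter(self):
        # Illustrative sketch only (not part of upstream localrepo.py): the
        # filter tables loaded above are driven by hgrc sections such as:
        #
        #     [encode]
        #     *.txt = dos2unix:
        #
        # A Python data filter is matched by name prefix against the
        # configured command, so registering the hypothetical name
        # 'dos2unix:' via adddatafilter() (below) makes the pattern above
        # call it instead of spawning a shell command:
        def dos2unix(s, cmd, **kwargs):
            return s.replace('\r\n', '\n')
        self.adddatafilter('dos2unix:', dos2unix)
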
    @unfilteredpropertycache
    def _encodefilterpats(self):
        return self._loadfilter('encode')

    @unfilteredpropertycache
    def _decodefilterpats(self):
        return self._loadfilter('decode')

    def adddatafilter(self, name, filter):
        self._datafilters[name] = filter

    def wread(self, filename):
        if self._link(filename):
            data = self.wvfs.readlink(filename)
        else:
            data = self.wopener.read(filename)
        return self._filter(self._encodefilterpats, filename, data)

    def wwrite(self, filename, data, flags):
        data = self._filter(self._decodefilterpats, filename, data)
        if 'l' in flags:
            self.wopener.symlink(data, filename)
        else:
            self.wopener.write(filename, data)
            if 'x' in flags:
                self.wvfs.setflags(filename, False, True)

    def wwritedata(self, filename, data):
        return self._filter(self._decodefilterpats, filename, data)

    def transaction(self, desc, report=None):
        tr = self._transref and self._transref() or None
        if tr and tr.running():
            return tr.nest()

        # abort here if the journal already exists
        if self.svfs.exists("journal"):
            raise error.RepoError(
                _("abandoned transaction found"),
                hint=_("run 'hg recover' to clean up transaction"))

        def onclose():
            self.store.write(self._transref())

        self._writejournal(desc)
        renames = [(vfs, x, undoname(x)) for vfs, x in self._journalfiles()]
        rp = report and report or self.ui.warn
        tr = transaction.transaction(rp, self.sopener,
                                     "journal",
                                     aftertrans(renames),
                                     self.store.createmode,
                                     onclose)
        self._transref = weakref.ref(tr)
        return tr

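    def _exampletransaction(self):
        # Illustrative sketch only (not part of upstream localrepo.py): the
        # canonical calling pattern. close() commits the journal, while
        # release() without a prior close() triggers the rollback, so an
        # exception raised between the two undoes every journaled write.
        tr = self.transaction('example-operation')
        try:
            # ... append to revlogs and other journaled files ...
            tr.close()
        finally:
            tr.release()
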
    def _journalfiles(self):
        return ((self.svfs, 'journal'),
                (self.vfs, 'journal.dirstate'),
                (self.vfs, 'journal.branch'),
                (self.vfs, 'journal.desc'),
                (self.vfs, 'journal.bookmarks'),
                (self.svfs, 'journal.phaseroots'))

    def undofiles(self):
        return [(vfs, undoname(x)) for vfs, x in self._journalfiles()]

    def _writejournal(self, desc):
        self.opener.write("journal.dirstate",
                          self.opener.tryread("dirstate"))
        self.opener.write("journal.branch",
                          encoding.fromlocal(self.dirstate.branch()))
        self.opener.write("journal.desc",
                          "%d\n%s\n" % (len(self), desc))
        self.opener.write("journal.bookmarks",
                          self.opener.tryread("bookmarks"))
        self.sopener.write("journal.phaseroots",
                           self.sopener.tryread("phaseroots"))

    def recover(self):
        lock = self.lock()
        try:
            if self.svfs.exists("journal"):
                self.ui.status(_("rolling back interrupted transaction\n"))
                transaction.rollback(self.sopener, "journal",
                                     self.ui.warn)
                self.invalidate()
                return True
            else:
                self.ui.warn(_("no interrupted transaction available\n"))
                return False
        finally:
            lock.release()

    def rollback(self, dryrun=False, force=False):
        wlock = lock = None
        try:
            wlock = self.wlock()
            lock = self.lock()
            if self.svfs.exists("undo"):
                return self._rollback(dryrun, force)
            else:
                self.ui.warn(_("no rollback information available\n"))
                return 1
        finally:
            release(lock, wlock)

    @unfilteredmethod # Until we get smarter cache management
    def _rollback(self, dryrun, force):
        ui = self.ui
        try:
            args = self.opener.read('undo.desc').splitlines()
            (oldlen, desc, detail) = (int(args[0]), args[1], None)
            if len(args) >= 3:
                detail = args[2]
            oldtip = oldlen - 1

            if detail and ui.verbose:
                msg = (_('repository tip rolled back to revision %s'
                         ' (undo %s: %s)\n')
                       % (oldtip, desc, detail))
            else:
                msg = (_('repository tip rolled back to revision %s'
                         ' (undo %s)\n')
                       % (oldtip, desc))
        except IOError:
            msg = _('rolling back unknown transaction\n')
            desc = None

        if not force and self['.'] != self['tip'] and desc == 'commit':
            raise util.Abort(
                _('rollback of last commit while not checked out '
                  'may lose data'), hint=_('use -f to force'))

        ui.status(msg)
        if dryrun:
            return 0

        parents = self.dirstate.parents()
        self.destroying()
        transaction.rollback(self.sopener, 'undo', ui.warn)
        if self.vfs.exists('undo.bookmarks'):
            self.vfs.rename('undo.bookmarks', 'bookmarks')
        if self.svfs.exists('undo.phaseroots'):
            self.svfs.rename('undo.phaseroots', 'phaseroots')
        self.invalidate()

        parentgone = (parents[0] not in self.changelog.nodemap or
                      parents[1] not in self.changelog.nodemap)
        if parentgone:
            self.vfs.rename('undo.dirstate', 'dirstate')
            try:
                branch = self.opener.read('undo.branch')
                self.dirstate.setbranch(encoding.tolocal(branch))
            except IOError:
                ui.warn(_('named branch could not be reset: '
                          'current branch is still \'%s\'\n')
                        % self.dirstate.branch())

            self.dirstate.invalidate()
            parents = tuple([p.rev() for p in self.parents()])
            if len(parents) > 1:
                ui.status(_('working directory now based on '
                            'revisions %d and %d\n') % parents)
            else:
                ui.status(_('working directory now based on '
                            'revision %d\n') % parents)
        # TODO: if we know which new heads may result from this rollback, pass
        # them to destroy(), which will prevent the branchhead cache from being
        # invalidated.
        self.destroyed()
        return 0

    def invalidatecaches(self):

        if '_tagscache' in vars(self):
            # can't use delattr on proxy
            del self.__dict__['_tagscache']

        self.unfiltered()._branchcaches.clear()
        self.invalidatevolatilesets()

    def invalidatevolatilesets(self):
        self.filteredrevcache.clear()
        obsolete.clearobscaches(self)

    def invalidatedirstate(self):
        '''Invalidates the dirstate, causing the next call to dirstate
        to check if it was modified since the last time it was read,
        rereading it if it has.

        This is different from dirstate.invalidate() in that it doesn't
        always reread the dirstate. Use dirstate.invalidate() if you want to
        explicitly read the dirstate again (i.e. restoring it to a previous
        known good state).'''
        if hasunfilteredcache(self, 'dirstate'):
            for k in self.dirstate._filecache:
                try:
                    delattr(self.dirstate, k)
                except AttributeError:
                    pass
            delattr(self.unfiltered(), 'dirstate')

    def invalidate(self):
        unfiltered = self.unfiltered() # all file caches are stored unfiltered
        for k in self._filecache:
            # dirstate is invalidated separately in invalidatedirstate()
            if k == 'dirstate':
                continue

            try:
                delattr(unfiltered, k)
            except AttributeError:
                pass
        self.invalidatecaches()
        self.store.invalidatecaches()

    def invalidateall(self):
        '''Fully invalidates both store and non-store parts, causing the
        subsequent operation to reread any outside changes.'''
        # extension should hook this to invalidate its caches
        self.invalidate()
        self.invalidatedirstate()

    def _lock(self, vfs, lockname, wait, releasefn, acquirefn, desc):
        try:
            l = lockmod.lock(vfs, lockname, 0, releasefn, desc=desc)
        except error.LockHeld, inst:
            if not wait:
                raise
            self.ui.warn(_("waiting for lock on %s held by %r\n") %
                         (desc, inst.locker))
            # default to 600 seconds timeout
            l = lockmod.lock(vfs, lockname,
                             int(self.ui.config("ui", "timeout", "600")),
                             releasefn, desc=desc)
            self.ui.warn(_("got lock after %s seconds\n") % l.delay)
        if acquirefn:
            acquirefn()
        return l

    def _afterlock(self, callback):
        """add a callback to the current repository lock.

        The callback will be executed on lock release."""
        l = self._lockref and self._lockref()
        if l:
            l.postrelease.append(callback)
        else:
            callback()

    def lock(self, wait=True):
        '''Lock the repository store (.hg/store) and return a weak reference
        to the lock. Use this before modifying the store (e.g. committing or
        stripping). If you are opening a transaction, get a lock as well.'''
        l = self._lockref and self._lockref()
        if l is not None and l.held:
            l.lock()
            return l

        def unlock():
            for k, ce in self._filecache.items():
                if k == 'dirstate' or k not in self.__dict__:
                    continue
                ce.refresh()

        l = self._lock(self.svfs, "lock", wait, unlock,
                       self.invalidate, _('repository %s') % self.origroot)
        self._lockref = weakref.ref(l)
        return l

    def wlock(self, wait=True):
        '''Lock the non-store parts of the repository (everything under
        .hg except .hg/store) and return a weak reference to the lock.
        Use this before modifying files in .hg.'''
        l = self._wlockref and self._wlockref()
        if l is not None and l.held:
            l.lock()
            return l

        def unlock():
            if self.dirstate.pendingparentchange():
                self.dirstate.invalidate()
            else:
                self.dirstate.write()

            self._filecache['dirstate'].refresh()

        l = self._lock(self.vfs, "wlock", wait, unlock,
                       self.invalidatedirstate, _('working directory of %s') %
                       self.origroot)
        self._wlockref = weakref.ref(l)
        return l

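    def _examplelockorder(self):
        # Illustrative sketch only (not part of upstream localrepo.py):
        # callers needing both locks take wlock() before lock() and release
        # in reverse order, so every hg process acquires the two locks in
        # the same order and deadlocks are avoided (see rollback() above
        # for a real caller of this pattern).
        wlock = lck = None
        try:
            wlock = self.wlock()
            lck = self.lock()
            # ... modify working directory and store ...
        finally:
            release(lck, wlock)
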
    def _filecommit(self, fctx, manifest1, manifest2, linkrev, tr, changelist):
        """
        commit an individual file as part of a larger transaction
        """

        fname = fctx.path()
        text = fctx.data()
        flog = self.file(fname)
        fparent1 = manifest1.get(fname, nullid)
        fparent2 = manifest2.get(fname, nullid)

        meta = {}
        copy = fctx.renamed()
        if copy and copy[0] != fname:
            # Mark the new revision of this file as a copy of another
            # file. This copy data will effectively act as a parent
            # of this new revision. If this is a merge, the first
            # parent will be the nullid (meaning "look up the copy data")
            # and the second one will be the other parent. For example:
            #
            # 0 --- 1 --- 3   rev1 changes file foo
            #   \       /     rev2 renames foo to bar and changes it
            #    \- 2 -/      rev3 should have bar with all changes and
            #                      should record that bar descends from
            #                      bar in rev2 and foo in rev1
            #
            # this allows this merge to succeed:
            #
            # 0 --- 1 --- 3   rev4 reverts the content change from rev2
            #   \       /     merging rev3 and rev4 should use bar@rev2
            #    \- 2 --- 4        as the merge base
            #

            cfname = copy[0]
            crev = manifest1.get(cfname)
            newfparent = fparent2

            if manifest2: # branch merge
                if fparent2 == nullid or crev is None: # copied on remote side
                    if cfname in manifest2:
                        crev = manifest2[cfname]
                        newfparent = fparent1

            # find source in nearest ancestor if we've lost track
            if not crev:
                self.ui.debug(" %s: searching for copy revision for %s\n" %
                              (fname, cfname))
                for ancestor in self[None].ancestors():
                    if cfname in ancestor:
                        crev = ancestor[cfname].filenode()
                        break

            if crev:
                self.ui.debug(" %s: copy %s:%s\n" % (fname, cfname, hex(crev)))
                meta["copy"] = cfname
                meta["copyrev"] = hex(crev)
                fparent1, fparent2 = nullid, newfparent
            else:
                self.ui.warn(_("warning: can't find ancestor for '%s' "
                               "copied from '%s'!\n") % (fname, cfname))

        elif fparent1 == nullid:
            fparent1, fparent2 = fparent2, nullid
        elif fparent2 != nullid:
            # is one parent an ancestor of the other?
            fparentancestors = flog.commonancestorsheads(fparent1, fparent2)
            if fparent1 in fparentancestors:
                fparent1, fparent2 = fparent2, nullid
            elif fparent2 in fparentancestors:
                fparent2 = nullid

        # is the file changed?
        if fparent2 != nullid or flog.cmp(fparent1, text) or meta:
            changelist.append(fname)
            return flog.add(text, meta, tr, linkrev, fparent1, fparent2)
        # are just the flags changed during merge?
        elif fname in manifest1 and manifest1.flags(fname) != fctx.flags():
            changelist.append(fname)

        return fparent1

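    # Illustration of the rename metadata recorded above (values are
    # hypothetical): for 'bar' committed as a copy of 'foo', the filelog
    # revision is written with
    #
    #   meta = {'copy': 'foo', 'copyrev': '<40-digit hex filenode of foo>'}
    #   fparent1, fparent2 = nullid, newfparent
    #
    # so a reader seeing a null first parent knows to look up the copy
    # metadata instead.
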
    @unfilteredmethod
    def commit(self, text="", user=None, date=None, match=None, force=False,
               editor=False, extra={}):
        """Add a new revision to current repository.

        Revision information is gathered from the working directory,
        match can be used to filter the committed files. If editor is
        supplied, it is called to get a commit message.
        """

        def fail(f, msg):
            raise util.Abort('%s: %s' % (f, msg))

        if not match:
            match = matchmod.always(self.root, '')

        if not force:
            vdirs = []
            match.explicitdir = vdirs.append
            match.bad = fail

        wlock = self.wlock()
        try:
            wctx = self[None]
            merge = len(wctx.parents()) > 1

            if (not force and merge and match and
                (match.files() or match.anypats())):
                raise util.Abort(_('cannot partially commit a merge '
                                   '(do not specify files or patterns)'))

            status = self.status(match=match, clean=force)
            if force:
                status.modified.extend(status.clean) # mq may commit clean files

            # check subrepos
            subs = []
            commitsubs = set()
            newstate = wctx.substate.copy()
            # only manage subrepos and .hgsubstate if .hgsub is present
            if '.hgsub' in wctx:
                # we'll decide whether to track this ourselves, thanks
                for c in status.modified, status.added, status.removed:
                    if '.hgsubstate' in c:
                        c.remove('.hgsubstate')

                # compare current state to last committed state
                # build new substate based on last committed state
                oldstate = wctx.p1().substate
                for s in sorted(newstate.keys()):
                    if not match(s):
                        # ignore working copy, use old state if present
                        if s in oldstate:
                            newstate[s] = oldstate[s]
                            continue
                        if not force:
                            raise util.Abort(
                                _("commit with new subrepo %s excluded") % s)
                    if wctx.sub(s).dirty(True):
                        if not self.ui.configbool('ui', 'commitsubrepos'):
                            raise util.Abort(
                                _("uncommitted changes in subrepo %s") % s,
                                hint=_("use --subrepos for recursive commit"))
                        subs.append(s)
                        commitsubs.add(s)
                    else:
                        bs = wctx.sub(s).basestate()
                        newstate[s] = (newstate[s][0], bs, newstate[s][2])
                        if oldstate.get(s, (None, None, None))[1] != bs:
                            subs.append(s)

                # check for removed subrepos
                for p in wctx.parents():
                    r = [s for s in p.substate if s not in newstate]
                    subs += [s for s in r if match(s)]
                if subs:
                    if (not match('.hgsub') and
                        '.hgsub' in (wctx.modified() + wctx.added())):
                        raise util.Abort(
                            _("can't commit subrepos without .hgsub"))
                    status.modified.insert(0, '.hgsubstate')

            elif '.hgsub' in status.removed:
                # clean up .hgsubstate when .hgsub is removed
                if ('.hgsubstate' in wctx and
                    '.hgsubstate' not in (status.modified + status.added +
                                          status.removed)):
                    status.removed.insert(0, '.hgsubstate')

            # make sure all explicit patterns are matched
            if not force and match.files():
                matched = set(status.modified + status.added + status.removed)

                for f in match.files():
                    f = self.dirstate.normalize(f)
                    if f == '.' or f in matched or f in wctx.substate:
                        continue
                    if f in status.deleted:
                        fail(f, _('file not found!'))
                    if f in vdirs: # visited directory
                        d = f + '/'
                        for mf in matched:
                            if mf.startswith(d):
                                break
                        else:
                            fail(f, _("no match under directory!"))
                    elif f not in self.dirstate:
                        fail(f, _("file not tracked!"))

            cctx = context.workingctx(self, text, user, date, extra, status)

            if (not force and not extra.get("close") and not merge
                and not cctx.files()
                and wctx.branch() == wctx.p1().branch()):
                return None

            if merge and cctx.deleted():
                raise util.Abort(_("cannot commit merge with missing files"))

            ms = mergemod.mergestate(self)
            for f in status.modified:
                if f in ms and ms[f] == 'u':
                    raise util.Abort(_("unresolved merge conflicts "
                                       "(see hg help resolve)"))

            if editor:
                cctx._text = editor(self, cctx, subs)
            edited = (text != cctx._text)

            # Save commit message in case this transaction gets rolled back
            # (e.g. by a pretxncommit hook). Leave the content alone on
            # the assumption that the user will use the same editor again.
            msgfn = self.savecommitmessage(cctx._text)

            # commit subs and write new state
            if subs:
                for s in sorted(commitsubs):
                    sub = wctx.sub(s)
                    self.ui.status(_('committing subrepository %s\n') %
                                   subrepo.subrelpath(sub))
                    sr = sub.commit(cctx._text, user, date)
                    newstate[s] = (newstate[s][0], sr)
                subrepo.writestate(self, newstate)

            p1, p2 = self.dirstate.parents()
            hookp1, hookp2 = hex(p1), (p2 != nullid and hex(p2) or '')
            try:
                self.hook("precommit", throw=True, parent1=hookp1,
                          parent2=hookp2)
                ret = self.commitctx(cctx, True)
            except: # re-raises
                if edited:
                    self.ui.write(
                        _('note: commit message saved in %s\n') % msgfn)
                raise

            # update bookmarks, dirstate and mergestate
            bookmarks.update(self, [p1, p2], ret)
            cctx.markcommitted(ret)
            ms.reset()
        finally:
            wlock.release()

        def commithook(node=hex(ret), parent1=hookp1, parent2=hookp2):
            self.hook("commit", node=node, parent1=parent1, parent2=parent2)
        self._afterlock(commithook)
        return ret

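    # Usage sketch (illustrative; the repository path is hypothetical):
    #
    #   from mercurial import ui as uimod, hg
    #   repo = hg.repository(uimod.ui(), '/path/to/repo')
    #   node = repo.commit(text='fix frobnication',
    #                      user='alice <alice@example.com>')
    #
    # commit() returns the new changeset node, or None when there is
    # nothing to commit.
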
    @unfilteredmethod
    def commitctx(self, ctx, error=False):
        """Add a new revision to current repository.
        Revision information is passed via the context argument.
        """

        tr = None
        p1, p2 = ctx.p1(), ctx.p2()
        user = ctx.user()

        lock = self.lock()
        try:
            tr = self.transaction("commit")
            trp = weakref.proxy(tr)

            if ctx.files():
                m1 = p1.manifest()
                m2 = p2.manifest()
                m = m1.copy()

                # check in files
                added = []
                changed = []
                removed = list(ctx.removed())
                linkrev = len(self)
                for f in sorted(ctx.modified() + ctx.added()):
                    self.ui.note(f + "\n")
                    try:
                        fctx = ctx[f]
                        if fctx is None:
                            removed.append(f)
                        else:
                            added.append(f)
                            m[f] = self._filecommit(fctx, m1, m2, linkrev,
                                                    trp, changed)
                            m.setflag(f, fctx.flags())
                    except OSError, inst:
                        self.ui.warn(_("trouble committing %s!\n") % f)
                        raise
                    except IOError, inst:
                        errcode = getattr(inst, 'errno', errno.ENOENT)
                        if error or errcode and errcode != errno.ENOENT:
                            self.ui.warn(_("trouble committing %s!\n") % f)
                            raise

                # update manifest
                removed = [f for f in sorted(removed) if f in m1 or f in m2]
                drop = [f for f in removed if f in m]
                for f in drop:
                    del m[f]
                mn = self.manifest.add(m, trp, linkrev,
                                       p1.manifestnode(), p2.manifestnode(),
                                       added, drop)
                files = changed + removed
            else:
                mn = p1.manifestnode()
                files = []

            # update changelog
            self.changelog.delayupdate()
            n = self.changelog.add(mn, files, ctx.description(),
                                   trp, p1.node(), p2.node(),
                                   user, ctx.date(), ctx.extra().copy())
            p = lambda: self.changelog.writepending() and self.root or ""
            xp1, xp2 = p1.hex(), p2 and p2.hex() or ''
            self.hook('pretxncommit', throw=True, node=hex(n), parent1=xp1,
                      parent2=xp2, pending=p)
            self.changelog.finalize(trp)
            # set the new commit in its proper phase
            targetphase = subrepo.newcommitphase(self.ui, ctx)
            if targetphase:
                # retracting the boundary does not alter parent changesets:
                # if a parent has a higher phase, the resulting phase will
                # be compliant anyway
                #
                # if the minimal phase was 0 we don't need to retract anything
                phases.retractboundary(self, tr, targetphase, [n])
            tr.close()
            branchmap.updatecache(self.filtered('served'))
            return n
        finally:
            if tr:
                tr.release()
            lock.release()

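    # Hook sketch (illustrative; names and paths are hypothetical): the
    # pretxncommit hook fired above can veto the pending commit. With an
    # hgrc entry such as
    #
    #   [hooks]
    #   pretxncommit.nocrlf = python:/path/to/checks.py:nocrlf
    #
    # an in-process hook could look like:
    #
    #   def nocrlf(ui, repo, node, **kwargs):
    #       if '\r' in repo[node].description():
    #           ui.warn('commit message contains carriage returns\n')
    #           return True    # a true result aborts and rolls back
    #       return False
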
    @unfilteredmethod
    def destroying(self):
        '''Inform the repository that nodes are about to be destroyed.
        Intended for use by strip and rollback, so there's a common
        place for anything that has to be done before destroying history.

        This is mostly useful for saving state that is in memory and waiting
        to be flushed when the current lock is released. Because a call to
        destroyed is imminent, the repo will be invalidated causing those
        changes to stay in memory (waiting for the next unlock), or vanish
        completely.
        '''
        # When using the same lock to commit and strip, the phasecache is left
        # dirty after committing. Then when we strip, the repo is invalidated,
        # causing those changes to disappear.
        if '_phasecache' in vars(self):
            self._phasecache.write()

    @unfilteredmethod
    def destroyed(self):
        '''Inform the repository that nodes have been destroyed.
        Intended for use by strip and rollback, so there's a common
        place for anything that has to be done after destroying history.
        '''
        # When one tries to:
        # 1) destroy nodes thus calling this method (e.g. strip)
        # 2) use phasecache somewhere (e.g. commit)
        #
        # then 2) will fail because the phasecache contains nodes that were
        # removed. We can either remove phasecache from the filecache,
        # causing it to reload next time it is accessed, or simply filter
        # the removed nodes now and write the updated cache.
        self._phasecache.filterunknown(self)
        self._phasecache.write()

        # update the 'served' branch cache to help read-only server processes
        # Thanks to branchcache collaboration this is done from the nearest
        # filtered subset and it is expected to be fast.
        branchmap.updatecache(self.filtered('served'))

        # Ensure the persistent tag cache is updated. Doing it now
        # means that the tag cache only has to worry about destroyed
        # heads immediately after a strip/rollback. That in turn
        # guarantees that "cachetip == currenttip" (comparing both rev
        # and node) always means no nodes have been added or destroyed.

        # XXX this is suboptimal when qrefresh'ing: we strip the current
        # head, refresh the tag cache, then immediately add a new head.
        # But I think doing it this way is necessary for the "instant
        # tag cache retrieval" case to work.
        self.invalidate()

    def walk(self, match, node=None):
        '''
        walk recursively through the directory tree or a given
        changeset, finding all files matched by the match
        function
        '''
        return self[node].walk(match)

    def status(self, node1='.', node2=None, match=None,
               ignored=False, clean=False, unknown=False,
               listsubrepos=False):
        '''a convenience method that calls node1.status(node2)'''
        return self[node1].status(node2, match, ignored, clean, unknown,
                                  listsubrepos)

    def heads(self, start=None):
        heads = self.changelog.heads(start)
        # sort the output in rev descending order
        return sorted(heads, key=self.changelog.rev, reverse=True)

    def branchheads(self, branch=None, start=None, closed=False):
        '''return a (possibly filtered) list of heads for the given branch

        Heads are returned in topological order, from newest to oldest.
        If branch is None, use the dirstate branch.
        If start is not None, return only heads reachable from start.
        If closed is True, return heads that are marked as closed as well.
        '''
        if branch is None:
            branch = self[None].branch()
        branches = self.branchmap()
        if branch not in branches:
            return []
        # the cache returns heads ordered lowest to highest
        bheads = list(reversed(branches.branchheads(branch, closed=closed)))
        if start is not None:
            # filter out the heads that cannot be reached from startrev
            fbheads = set(self.changelog.nodesbetween([start], bheads)[2])
            bheads = [h for h in bheads if h in fbheads]
        return bheads

    def branches(self, nodes):
        if not nodes:
            nodes = [self.changelog.tip()]
        b = []
        for n in nodes:
            t = n
            while True:
                p = self.changelog.parents(n)
                if p[1] != nullid or p[0] == nullid:
                    b.append((t, n, p[0], p[1]))
                    break
                n = p[0]
        return b

    def between(self, pairs):
        r = []

        for top, bottom in pairs:
            n, l, i = top, [], 0
            f = 1

            while n != bottom and n != nullid:
                p = self.changelog.parents(n)[0]
                if i == f:
                    l.append(n)
                    f = f * 2
                n = p
                i += 1

            r.append(l)

        return r

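    # Worked sketch of the sampling above (illustrative): walking first
    # parents from top towards bottom, the current node is recorded whenever
    # the step counter i equals f, and f doubles after each hit, so l holds
    # the nodes at distances 1, 2, 4, 8, ... from top. For a linear chain
    # n9 -> n8 -> ... -> n0 with top=n9 and bottom=n0, the result is
    # [n8, n7, n5, n1].
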
    def checkpush(self, pushop):
        """Extensions can override this function if additional checks have
        to be performed before pushing, or call it if they override push
        command.
        """
        pass

    @unfilteredpropertycache
    def prepushoutgoinghooks(self):
        """Return a util.hooks object consisting of "(repo, remote, outgoing)"
        functions, which are called before pushing changesets.
        """
        return util.hooks()

    def stream_in(self, remote, requirements):
        lock = self.lock()
        try:
            # Save remote branchmap. We will use it later
            # to speed up branchcache creation
            rbranchmap = None
            if remote.capable("branchmap"):
                rbranchmap = remote.branchmap()

            fp = remote.stream_out()
            l = fp.readline()
            try:
                resp = int(l)
            except ValueError:
                raise error.ResponseError(
                    _('unexpected response from remote server:'), l)
            if resp == 1:
                raise util.Abort(_('operation forbidden by server'))
            elif resp == 2:
                raise util.Abort(_('locking the remote repository failed'))
            elif resp != 0:
                raise util.Abort(_('the server sent an unknown error code'))
            self.ui.status(_('streaming all changes\n'))
            l = fp.readline()
            try:
                total_files, total_bytes = map(int, l.split(' ', 1))
            except (ValueError, TypeError):
                raise error.ResponseError(
                    _('unexpected response from remote server:'), l)
            self.ui.status(_('%d files to transfer, %s of data\n') %
                           (total_files, util.bytecount(total_bytes)))
            handled_bytes = 0
            self.ui.progress(_('clone'), 0, total=total_bytes)
            start = time.time()

            tr = self.transaction(_('clone'))
            try:
                for i in xrange(total_files):
                    # XXX doesn't support '\n' or '\r' in filenames
                    l = fp.readline()
                    try:
                        name, size = l.split('\0', 1)
                        size = int(size)
                    except (ValueError, TypeError):
                        raise error.ResponseError(
                            _('unexpected response from remote server:'), l)
                    if self.ui.debugflag:
                        self.ui.debug('adding %s (%s)\n' %
                                      (name, util.bytecount(size)))
                    # for backwards compat, name was partially encoded
                    ofp = self.sopener(store.decodedir(name), 'w')
                    for chunk in util.filechunkiter(fp, limit=size):
                        handled_bytes += len(chunk)
                        self.ui.progress(_('clone'), handled_bytes,
                                         total=total_bytes)
                        ofp.write(chunk)
                    ofp.close()
                tr.close()
            finally:
                tr.release()

            # Writing straight to files circumvented the in-memory caches
            self.invalidate()

            elapsed = time.time() - start
            if elapsed <= 0:
                elapsed = 0.001
            self.ui.progress(_('clone'), None)
            self.ui.status(_('transferred %s in %.1f seconds (%s/sec)\n') %
                           (util.bytecount(total_bytes), elapsed,
                            util.bytecount(total_bytes / elapsed)))

            # new requirements = old non-format requirements +
            #                    new format-related
            # requirements from the streamed-in repository
            requirements.update(set(self.requirements) - self.supportedformats)
            self._applyrequirements(requirements)
            self._writerequirements()

            if rbranchmap:
                rbheads = []
                for bheads in rbranchmap.itervalues():
                    rbheads.extend(bheads)

                if rbheads:
                    rtiprev = max((int(self.changelog.rev(node))
                                   for node in rbheads))
                    cache = branchmap.branchcache(rbranchmap,
                                                  self[rtiprev].node(),
                                                  rtiprev)
                    # Try to stick it as low as possible
                    # filters above 'served' are unlikely to be fetched from
                    # a clone
                    for candidate in ('base', 'immutable', 'served'):
                        rview = self.filtered(candidate)
                        if cache.validfor(rview):
                            self._branchcaches[candidate] = cache
                            cache.write(rview)
                            break
            self.invalidate()
            return len(self.heads()) + 1
        finally:
            lock.release()

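    # Wire format sketch of the stream body parsed above (values shown are
    # hypothetical):
    #
    #   <resp-code>\n                    e.g. '0' for success
    #   <total-files> <total-bytes>\n    e.g. '2 8192'
    #
    # then, for each file:
    #
    #   <store-path>\0<size>\n           followed by exactly <size> raw bytes
    #
    # Paths are store-encoded, hence the store.decodedir() call when the
    # receiving side opens its output file.
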
    def clone(self, remote, heads=[], stream=False):
        '''clone remote repository.

        keyword arguments:
        heads: list of revs to clone (forces use of pull)
        stream: use streaming clone if possible'''

        # now, all clients that can request uncompressed clones can
        # read repo formats supported by all servers that can serve
        # them.

        # if revlog format changes, client will have to check version
        # and format flags on "stream" capability, and use
        # uncompressed only if compatible.

        if not stream:
            # if the server explicitly prefers to stream (for fast LANs)
            stream = remote.capable('stream-preferred')

        if stream and not heads:
            # 'stream' means remote revlog format is revlogv1 only
            if remote.capable('stream'):
                return self.stream_in(remote, set(('revlogv1',)))
            # otherwise, 'streamreqs' contains the remote revlog format
            streamreqs = remote.capable('streamreqs')
            if streamreqs:
                streamreqs = set(streamreqs.split(','))
                # if we support it, stream in and adjust our requirements
                if not streamreqs - self.supportedformats:
                    return self.stream_in(remote, streamreqs)

        quiet = self.ui.backupconfig('ui', 'quietbookmarkmove')
        try:
            self.ui.setconfig('ui', 'quietbookmarkmove', True, 'clone')
            ret = exchange.pull(self, remote, heads).cgresult
        finally:
            self.ui.restoreconfig(quiet)
        return ret

    def pushkey(self, namespace, key, old, new):
        self.hook('prepushkey', throw=True, namespace=namespace, key=key,
                  old=old, new=new)
        self.ui.debug('pushing key for "%s:%s"\n' % (namespace, key))
        ret = pushkey.push(self, namespace, key, old, new)
        self.hook('pushkey', namespace=namespace, key=key, old=old, new=new,
                  ret=ret)
        return ret

    def listkeys(self, namespace):
        self.hook('prelistkeys', throw=True, namespace=namespace)
        self.ui.debug('listing keys for "%s"\n' % namespace)
        values = pushkey.list(self, namespace)
        self.hook('listkeys', namespace=namespace, values=values)
        return values

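    # Usage sketch (illustrative): pushkey namespaces such as 'bookmarks'
    # and 'phases' round-trip through these two methods, e.g.
    #
    #   marks = repo.listkeys('bookmarks')   # {bookmark name: hex node}
    #   repo.pushkey('bookmarks', 'stable',
    #                marks.get('stable', ''), newhex)
    #
    # where 'newhex' is a hypothetical hex node; pushkey returns a result
    # indicating whether the update was accepted.
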
    def debugwireargs(self, one, two, three=None, four=None, five=None):
        '''used to test argument passing over the wire'''
        return "%s %s %s %s %s" % (one, two, three, four, five)

    def savecommitmessage(self, text):
        fp = self.opener('last-message.txt', 'wb')
        try:
            fp.write(text)
        finally:
            fp.close()
        return self.pathto(fp.name[len(self.root) + 1:])

# used to avoid circular references so destructors work
def aftertrans(files):
    renamefiles = [tuple(t) for t in files]
    def a():
        for vfs, src, dest in renamefiles:
            try:
                vfs.rename(src, dest)
            except OSError: # journal file does not yet exist
                pass
    return a

def undoname(fn):
    base, name = os.path.split(fn)
    assert name.startswith('journal')
    return os.path.join(base, name.replace('journal', 'undo', 1))

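# Illustration of the journal -> undo mapping computed by undoname() (the
# paths are examples):
#
#   undoname('.hg/store/journal')            == '.hg/store/undo'
#   undoname('.hg/store/journal.phaseroots') == '.hg/store/undo.phaseroots'
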
def instance(ui, path, create):
    return localrepository(ui, util.urllocalpath(path), create)

def islocal(path):
    return True

@@ -1,869 +1,869 @@
# wireproto.py - generic wire protocol support functions
#
# Copyright 2005-2010 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

import urllib, tempfile, os, sys
from i18n import _
from node import bin, hex
import changegroup as changegroupmod, bundle2, pushkey as pushkeymod
import peer, error, encoding, util, store, exchange


class abstractserverproto(object):
    """abstract class that summarizes the protocol API

    Used as reference and documentation.
    """

    def getargs(self, args):
        """return the value for arguments in <args>

        returns a list of values (same order as <args>)"""
        raise NotImplementedError()

    def getfile(self, fp):
        """write the whole content of a file into a file-like object

        The file is in the form::

            (<chunk-size>\n<chunk>)+0\n

        chunk size is the ASCII version of the int.
        """
        raise NotImplementedError()

    def redirect(self):
        """may setup interception for stdout and stderr

        See also the `restore` method."""
        raise NotImplementedError()

    # If the `redirect` function does install interception, the `restore`
    # function MUST be defined. If interception is not used, this function
    # MUST NOT be defined.
    #
    # left commented here on purpose
    #
    #def restore(self):
    #    """reinstall previous stdout and stderr and return intercepted stdout
    #    """
    #    raise NotImplementedError()

    def groupchunks(self, cg):
        """return 4096-byte chunks from a changegroup object

        Some protocols may have compressed the contents."""
        raise NotImplementedError()

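# A hedged sketch of the getfile framing documented above (illustrative,
# not part of the original module): each chunk is preceded by its size in
# ASCII on its own line, and a '0' line terminates the stream.
def _readframed(fp):
    chunks = []
    while True:
        size = int(fp.readline())
        if not size:
            break
        chunks.append(fp.read(size))
    return ''.join(chunks)
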
# abstract batching support

class future(object):
    '''placeholder for a value to be set later'''
    def set(self, value):
        if util.safehasattr(self, 'value'):
            raise error.RepoError("future is already set")
        self.value = value

class batcher(object):
    '''base class for batches of commands submittable in a single request

    All methods invoked on instances of this class are simply queued and
    return a future for the result. Once you call submit(), all the queued
    calls are performed and the results set in their respective futures.
    '''
    def __init__(self):
        self.calls = []
    def __getattr__(self, name):
        def call(*args, **opts):
            resref = future()
            self.calls.append((name, args, opts, resref,))
            return resref
        return call
    def submit(self):
        pass

class localbatch(batcher):
    '''performs the queued calls directly'''
    def __init__(self, local):
        batcher.__init__(self)
        self.local = local
    def submit(self):
        for name, args, opts, resref in self.calls:
            resref.set(getattr(self.local, name)(*args, **opts))

class remotebatch(batcher):
    '''batches the queued calls; uses as few roundtrips as possible'''
    def __init__(self, remote):
        '''remote must support _submitbatch(encbatch) and
        _submitone(op, encargs)'''
        batcher.__init__(self)
        self.remote = remote
    def submit(self):
        req, rsp = [], []
        for name, args, opts, resref in self.calls:
            mtd = getattr(self.remote, name)
            batchablefn = getattr(mtd, 'batchable', None)
            if batchablefn is not None:
                batchable = batchablefn(mtd.im_self, *args, **opts)
                encargsorres, encresref = batchable.next()
                if encresref:
                    req.append((name, encargsorres,))
                    rsp.append((batchable, encresref, resref,))
                else:
                    resref.set(encargsorres)
            else:
                if req:
                    self._submitreq(req, rsp)
                    req, rsp = [], []
                resref.set(mtd(*args, **opts))
        if req:
            self._submitreq(req, rsp)
    def _submitreq(self, req, rsp):
        encresults = self.remote._submitbatch(req)
        for encres, r in zip(encresults, rsp):
            batchable, encresref, resref = r
            encresref.set(encres)
            resref.set(batchable.next())

def batchable(f):
    '''annotation for batchable methods

    Such methods must implement a coroutine as follows:

    @batchable
    def sample(self, one, two=None):
        # Handle locally computable results first:
        if not one:
            yield "a local result", None
        # Build list of encoded arguments suitable for your wire protocol:
        encargs = [('one', encode(one),), ('two', encode(two),)]
        # Create future for injection of encoded result:
        encresref = future()
        # Return encoded arguments and future:
        yield encargs, encresref
        # Assuming the future to be filled with the result from the batched
        # request now. Decode it:
        yield decode(encresref.value)

    The decorator returns a function which wraps this coroutine as a plain
    method, but adds the original method as an attribute called "batchable",
    which is used by remotebatch to split the call into separate encoding and
    decoding phases.
    '''
    def plain(*args, **opts):
        batchable = f(*args, **opts)
        encargsorres, encresref = batchable.next()
        if not encresref:
            return encargsorres # a local result in this case
        self = args[0]
        encresref.set(self._submitone(f.func_name, encargsorres))
        return batchable.next()
    setattr(plain, 'batchable', f)
    return plain

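# Usage sketch of the batching machinery above (illustrative; 'remote' is
# any peer whose methods are decorated with @batchable, and 'somenodes' is
# a hypothetical list of binary nodes):
#
#   batch = remote.batch()
#   h = batch.heads()              # queued, returns a future
#   k = batch.known(somenodes)     # queued, returns a future
#   batch.submit()                 # one wire roundtrip fills both futures
#   heads, known = h.value, k.value
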
# list of nodes encoding / decoding

def decodelist(l, sep=' '):
    if l:
        return map(bin, l.split(sep))
    return []

def encodelist(l, sep=' '):
    return sep.join(map(hex, l))

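# Round-trip sketch (illustrative; the node values are hypothetical):
#
#   nodes = ['\x12' * 20, '\x34' * 20]   # 20-byte binary nodes
#   wire = encodelist(nodes)             # '1212...12 3434...34'
#   assert decodelist(wire) == nodes
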
# batched call argument encoding

def escapearg(plain):
    return (plain
            .replace(':', '::')
            .replace(',', ':,')
            .replace(';', ':;')
            .replace('=', ':='))

def unescapearg(escaped):
    return (escaped
            .replace(':=', '=')
            .replace(':;', ';')
            .replace(':,', ',')
            .replace('::', ':'))

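# Round-trip sketch: the escaping protects the batch separators (';'
# between commands, ',' between arguments, '=' inside key=value pairs) by
# prefixing them with ':'.
#
#   assert escapearg('a=b;c,d:e') == 'a:=b:;c:,d::e'
#   assert unescapearg('a:=b:;c:,d::e') == 'a=b;c,d:e'
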
# mapping of options accepted by getbundle and their types
#
# Meant to be extended by extensions. It is the extension's responsibility to
# ensure such options are properly processed in exchange.getbundle.
#
# supported types are:
#
# :nodes: list of binary nodes
# :csv:   list of comma-separated values
# :plain: string with no transformation needed.
gboptsmap = {'heads': 'nodes',
             'common': 'nodes',
             'obsmarkers': 'boolean',
             'bundlecaps': 'csv',
             'listkeys': 'csv',
             'cg': 'boolean'}

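# Extension sketch (illustrative; the option name is hypothetical): an
# extension advertising an extra getbundle option registers its wire type
# here and must then honor the option in exchange.getbundle.
#
#   from mercurial import wireproto
#   wireproto.gboptsmap['myextdata'] = 'plain'
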
210 # client side
210 # client side
211
211
212 class wirepeer(peer.peerrepository):
212 class wirepeer(peer.peerrepository):
213
213
214 def batch(self):
214 def batch(self):
215 return remotebatch(self)
215 return remotebatch(self)
216 def _submitbatch(self, req):
216 def _submitbatch(self, req):
217 cmds = []
217 cmds = []
218 for op, argsdict in req:
218 for op, argsdict in req:
219 args = ','.join('%s=%s' % p for p in argsdict.iteritems())
219 args = ','.join('%s=%s' % p for p in argsdict.iteritems())
220 cmds.append('%s %s' % (op, args))
220 cmds.append('%s %s' % (op, args))
221 rsp = self._call("batch", cmds=';'.join(cmds))
221 rsp = self._call("batch", cmds=';'.join(cmds))
222 return rsp.split(';')
222 return rsp.split(';')
223 def _submitone(self, op, args):
223 def _submitone(self, op, args):
224 return self._call(op, **args)
224 return self._call(op, **args)
225
225
    @batchable
    def lookup(self, key):
        self.requirecap('lookup', _('look up remote revision'))
        f = future()
        yield {'key': encoding.fromlocal(key)}, f
        d = f.value
        success, data = d[:-1].split(" ", 1)
        if int(success):
            yield bin(data)
        self._abort(error.RepoError(data))

    @batchable
    def heads(self):
        f = future()
        yield {}, f
        d = f.value
        try:
            yield decodelist(d[:-1])
        except ValueError:
            self._abort(error.ResponseError(_("unexpected response:"), d))

    @batchable
    def known(self, nodes):
        f = future()
        yield {'nodes': encodelist(nodes)}, f
        d = f.value
        try:
            yield [bool(int(b)) for b in d]
        except ValueError:
            self._abort(error.ResponseError(_("unexpected response:"), d))

    @batchable
    def branchmap(self):
        f = future()
        yield {}, f
        d = f.value
        try:
            branchmap = {}
            for branchpart in d.splitlines():
                branchname, branchheads = branchpart.split(' ', 1)
                branchname = encoding.tolocal(urllib.unquote(branchname))
                branchheads = decodelist(branchheads)
                branchmap[branchname] = branchheads
            yield branchmap
        except TypeError:
            self._abort(error.ResponseError(_("unexpected response:"), d))

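    # A sketch (not part of the original module) of how the batchable calls
    # above compose on the client side; 'somepeer' and 'somenode' are
    # placeholders.
    #
    #   b = somepeer.batch()           # see wirepeer.batch() above
    #   fheads = b.heads()             # futures, not values, at this point
    #   fknown = b.known([somenode])
    #   b.submit()                     # a single 'batch' command on the wire
    #   heads, known = fheads.value, fknown.value
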
    def branches(self, nodes):
        n = encodelist(nodes)
        d = self._call("branches", nodes=n)
        try:
            br = [tuple(decodelist(b)) for b in d.splitlines()]
            return br
        except ValueError:
            self._abort(error.ResponseError(_("unexpected response:"), d))

    def between(self, pairs):
        batch = 8 # avoid giant requests
        r = []
        for i in xrange(0, len(pairs), batch):
            n = " ".join([encodelist(p, '-') for p in pairs[i:i + batch]])
            d = self._call("between", pairs=n)
            try:
                r.extend(l and decodelist(l) or [] for l in d.splitlines())
            except ValueError:
                self._abort(error.ResponseError(_("unexpected response:"), d))
        return r

    @batchable
    def pushkey(self, namespace, key, old, new):
        if not self.capable('pushkey'):
            yield False, None
        f = future()
        self.ui.debug('preparing pushkey for "%s:%s"\n' % (namespace, key))
        yield {'namespace': encoding.fromlocal(namespace),
               'key': encoding.fromlocal(key),
               'old': encoding.fromlocal(old),
               'new': encoding.fromlocal(new)}, f
        d = f.value
        d, output = d.split('\n', 1)
        try:
            d = bool(int(d))
        except ValueError:
            raise error.ResponseError(
                _('push failed (unexpected response):'), d)
        for l in output.splitlines(True):
            self.ui.status(_('remote: '), l)
        yield d

    @batchable
    def listkeys(self, namespace):
        if not self.capable('pushkey'):
            yield {}, None
        f = future()
        self.ui.debug('preparing listkeys for "%s"\n' % namespace)
        yield {'namespace': encoding.fromlocal(namespace)}, f
        d = f.value
        yield pushkeymod.decodekeys(d)

    def stream_out(self):
        return self._callstream('stream_out')

    def changegroup(self, nodes, kind):
        n = encodelist(nodes)
        f = self._callcompressable("changegroup", roots=n)
        return changegroupmod.cg1unpacker(f, 'UN')

    def changegroupsubset(self, bases, heads, kind):
        self.requirecap('changegroupsubset', _('look up remote changes'))
        bases = encodelist(bases)
        heads = encodelist(heads)
        f = self._callcompressable("changegroupsubset",
                                   bases=bases, heads=heads)
        return changegroupmod.cg1unpacker(f, 'UN')

    def getbundle(self, source, **kwargs):
        self.requirecap('getbundle', _('look up remote changes'))
        opts = {}
        for key, value in kwargs.iteritems():
            if value is None:
                continue
            keytype = gboptsmap.get(key)
            if keytype is None:
                assert False, 'unexpected'
            elif keytype == 'nodes':
                value = encodelist(value)
            elif keytype == 'csv':
                value = ','.join(value)
            elif keytype == 'boolean':
                value = '%i' % bool(value)
            elif keytype != 'plain':
                raise KeyError('unknown getbundle option type %s'
                               % keytype)
            opts[key] = value
        f = self._callcompressable("getbundle", **opts)
        bundlecaps = kwargs.get('bundlecaps')
-        if bundlecaps is not None and 'HG2X' in bundlecaps:
+        if bundlecaps is not None and 'HG2Y' in bundlecaps:
            return bundle2.unbundle20(self.ui, f)
        else:
            return changegroupmod.cg1unpacker(f, 'UN')

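    # For illustration (not part of the original module): a hypothetical call
    #
    #   peer.getbundle('pull', heads=[node], bundlecaps=set(['HG2Y']), cg=True)
    #
    # is encoded by the loop above into the wire arguments
    #
    #   heads='<40-char hex>'  bundlecaps='HG2Y'  cg='1'
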
    def unbundle(self, cg, heads, source):
        '''Send cg (a readable file-like object representing the
        changegroup to push, typically a chunkbuffer object) to the
        remote server as a bundle.

        When pushing a bundle10 stream, return an integer indicating the
        result of the push (see localrepository.addchangegroup()).

        When pushing a bundle20 stream, return a bundle20 stream.'''

        if heads != ['force'] and self.capable('unbundlehash'):
            heads = encodelist(['hashed',
                                util.sha1(''.join(sorted(heads))).digest()])
        else:
            heads = encodelist(heads)

        if util.safehasattr(cg, 'deltaheader'):
            # this is a bundle10, do the old style call sequence
            ret, output = self._callpush("unbundle", cg, heads=heads)
            if ret == "":
                raise error.ResponseError(
                    _('push failed:'), output)
            try:
                ret = int(ret)
            except ValueError:
                raise error.ResponseError(
                    _('push failed (unexpected response):'), ret)

            for l in output.splitlines(True):
                self.ui.status(_('remote: '), l)
        else:
            # bundle2 push. Send a stream, fetch a stream.
            stream = self._calltwowaystream('unbundle', cg, heads=heads)
            ret = bundle2.unbundle20(self.ui, stream)
        return ret

    def debugwireargs(self, one, two, three=None, four=None, five=None):
        # don't pass optional arguments left at their default value
        opts = {}
        if three is not None:
            opts['three'] = three
        if four is not None:
            opts['four'] = four
        return self._call('debugwireargs', one=one, two=two, **opts)

    def _call(self, cmd, **args):
        """execute <cmd> on the server

        The command is expected to return a simple string.

        returns the server reply as a string."""
        raise NotImplementedError()

    def _callstream(self, cmd, **args):
        """execute <cmd> on the server

        The command is expected to return a stream.

        returns the server reply as a file like object."""
        raise NotImplementedError()

    def _callcompressable(self, cmd, **args):
        """execute <cmd> on the server

        The command is expected to return a stream.

        The stream may have been compressed in some implementations. This
        function takes care of the decompression. This is the only difference
        from _callstream.

        returns the server reply as a file like object.
        """
        raise NotImplementedError()

    def _callpush(self, cmd, fp, **args):
        """execute a <cmd> on server

        The command is expected to be related to a push. Push has a special
        return method.

        returns the server reply as a (ret, output) tuple. ret is either
        empty (error) or a stringified int.
        """
        raise NotImplementedError()

    def _calltwowaystream(self, cmd, fp, **args):
        """execute <cmd> on server

        The command will send a stream to the server and get a stream in reply.
        """
        raise NotImplementedError()

    def _abort(self, exception):
        """clearly abort the wire protocol connection and raise the exception
        """
        raise NotImplementedError()

# server side

# wire protocol commands can either return a string or one of these classes.
class streamres(object):
    """wireproto reply: binary stream

    The call was successful and the result is a stream.
    Iterate on the `self.gen` attribute to retrieve chunks.
    """
    def __init__(self, gen):
        self.gen = gen

class pushres(object):
    """wireproto reply: success with simple integer return

    The call was successful and returned an integer contained in `self.res`.
    """
    def __init__(self, res):
        self.res = res

class pusherr(object):
    """wireproto reply: failure

    The call failed. The `self.res` attribute contains the error message.
    """
    def __init__(self, res):
        self.res = res

class ooberror(object):
    """wireproto reply: failure of a batch of operations

    Something failed during a batch call. The error message is stored in
    `self.message`.
    """
    def __init__(self, message):
        self.message = message

def dispatch(repo, proto, command):
    repo = repo.filtered("served")
    func, spec = commands[command]
    args = proto.getargs(spec)
    return func(repo, proto, *args)

def options(cmd, keys, others):
    opts = {}
    for k in keys:
        if k in others:
            opts[k] = others[k]
            del others[k]
    if others:
        sys.stderr.write("warning: %s ignored unexpected arguments %s\n"
                         % (cmd, ",".join(others)))
    return opts

# list of commands
commands = {}

def wireprotocommand(name, args=''):
    """decorator for wire protocol command"""
    def register(func):
        commands[name] = (func, args)
        return func
    return register

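# For illustration (not part of the original module): registering an extra
# command with the decorator above; the 'whoami' name is made up.
#
#   @wireprotocommand('whoami')
#   def whoami(repo, proto):
#       # a plain-string return value is sent to the client verbatim
#       return repo.ui.username()
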
@wireprotocommand('batch', 'cmds *')
def batch(repo, proto, cmds, others):
    repo = repo.filtered("served")
    res = []
    for pair in cmds.split(';'):
        op, args = pair.split(' ', 1)
        vals = {}
        for a in args.split(','):
            if a:
                n, v = a.split('=')
                vals[n] = unescapearg(v)
        func, spec = commands[op]
        if spec:
            keys = spec.split()
            data = {}
            for k in keys:
                if k == '*':
                    star = {}
                    for key in vals.keys():
                        if key not in keys:
                            star[key] = vals[key]
                    data['*'] = star
                else:
                    data[k] = vals[k]
            result = func(repo, proto, *[data[k] for k in keys])
        else:
            result = func(repo, proto)
        if isinstance(result, ooberror):
            return result
        res.append(escapearg(result))
    return ';'.join(res)

@wireprotocommand('between', 'pairs')
def between(repo, proto, pairs):
    pairs = [decodelist(p, '-') for p in pairs.split(" ")]
    r = []
    for b in repo.between(pairs):
        r.append(encodelist(b) + "\n")
    return "".join(r)

@wireprotocommand('branchmap')
def branchmap(repo, proto):
    branchmap = repo.branchmap()
    heads = []
    for branch, nodes in branchmap.iteritems():
        branchname = urllib.quote(encoding.fromlocal(branch))
        branchnodes = encodelist(nodes)
        heads.append('%s %s' % (branchname, branchnodes))
    return '\n'.join(heads)

@wireprotocommand('branches', 'nodes')
def branches(repo, proto, nodes):
    nodes = decodelist(nodes)
    r = []
    for b in repo.branches(nodes):
        r.append(encodelist(b) + "\n")
    return "".join(r)


wireprotocaps = ['lookup', 'changegroupsubset', 'branchmap', 'pushkey',
                 'known', 'getbundle', 'unbundlehash', 'batch']

def _capabilities(repo, proto):
    """return a list of capabilities for a repo

    This function exists to allow extensions to easily wrap capabilities
    computation

    - returns a list: easy to alter
    - changes done here will be propagated to both the `capabilities` and
      `hello` commands without any other action needed.
    """
    # copy to prevent modification of the global list
    caps = list(wireprotocaps)
    if _allowstream(repo.ui):
        if repo.ui.configbool('server', 'preferuncompressed', False):
            caps.append('stream-preferred')
        requiredformats = repo.requirements & repo.supportedformats
        # if our local revlogs are just revlogv1, add 'stream' cap
        if not requiredformats - set(('revlogv1',)):
            caps.append('stream')
        # otherwise, add 'streamreqs' detailing our local revlog format
        else:
            caps.append('streamreqs=%s' % ','.join(requiredformats))
    if repo.ui.configbool('experimental', 'bundle2-exp', False):
        capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo))
        caps.append('bundle2-exp=' + urllib.quote(capsblob))
    caps.append('unbundle=%s' % ','.join(changegroupmod.bundlepriority))
    caps.append('httpheader=1024')
    return caps

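# For illustration (not part of the original module): depending on
# configuration, a bundle2-enabled server might advertise something like
#
#   lookup changegroupsubset branchmap pushkey known getbundle unbundlehash
#   batch stream bundle2-exp=<urlquoted caps blob>
#   unbundle=HG10GZ,HG10BZ,HG10UN httpheader=1024
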
# If you are writing an extension and consider wrapping this function, wrap
# `_capabilities` instead.
@wireprotocommand('capabilities')
def capabilities(repo, proto):
    return ' '.join(_capabilities(repo, proto))

@wireprotocommand('changegroup', 'roots')
def changegroup(repo, proto, roots):
    nodes = decodelist(roots)
    cg = changegroupmod.changegroup(repo, nodes, 'serve')
    return streamres(proto.groupchunks(cg))

@wireprotocommand('changegroupsubset', 'bases heads')
def changegroupsubset(repo, proto, bases, heads):
    bases = decodelist(bases)
    heads = decodelist(heads)
    cg = changegroupmod.changegroupsubset(repo, bases, heads, 'serve')
    return streamres(proto.groupchunks(cg))

@wireprotocommand('debugwireargs', 'one two *')
def debugwireargs(repo, proto, one, two, others):
    # only accept optional args from the known set
    opts = options('debugwireargs', ['three', 'four'], others)
    return repo.debugwireargs(one, two, **opts)

# List of options accepted by getbundle.
#
# Meant to be extended by extensions. It is the extension's responsibility to
# ensure such options are properly processed in exchange.getbundle.
gboptslist = ['heads', 'common', 'bundlecaps']

@wireprotocommand('getbundle', '*')
def getbundle(repo, proto, others):
    opts = options('getbundle', gboptsmap.keys(), others)
    for k, v in opts.iteritems():
        keytype = gboptsmap[k]
        if keytype == 'nodes':
            opts[k] = decodelist(v)
        elif keytype == 'csv':
            opts[k] = set(v.split(','))
        elif keytype == 'boolean':
            # the client serializes booleans as '0' or '1'; '0' is a
            # non-empty string and would be truthy under a plain bool(v)
            opts[k] = v == '1'
        elif keytype != 'plain':
            raise KeyError('unknown getbundle option type %s'
                           % keytype)
    cg = exchange.getbundle(repo, 'serve', **opts)
    return streamres(proto.groupchunks(cg))

@wireprotocommand('heads')
def heads(repo, proto):
    h = repo.heads()
    return encodelist(h) + "\n"

@wireprotocommand('hello')
def hello(repo, proto):
    '''the hello command returns a set of lines describing various
    interesting things about the server, in an RFC822-like format.
    Currently the only one defined is "capabilities", which
    consists of a line in the form:

    capabilities: space separated list of tokens
    '''
    return "capabilities: %s\n" % (capabilities(repo, proto))

@wireprotocommand('listkeys', 'namespace')
def listkeys(repo, proto, namespace):
    d = repo.listkeys(encoding.tolocal(namespace)).items()
    return pushkeymod.encodekeys(d)

@wireprotocommand('lookup', 'key')
def lookup(repo, proto, key):
    try:
        k = encoding.tolocal(key)
        c = repo[k]
        r = c.hex()
        success = 1
    except Exception, inst:
        r = str(inst)
        success = 0
    return "%s %s\n" % (success, r)

@wireprotocommand('known', 'nodes *')
def known(repo, proto, nodes, others):
    return ''.join(b and "1" or "0" for b in repo.known(decodelist(nodes)))

@wireprotocommand('pushkey', 'namespace key old new')
def pushkey(repo, proto, namespace, key, old, new):
    # compatibility with pre-1.8 clients which were accidentally
    # sending raw binary nodes rather than utf-8-encoded hex
    if len(new) == 20 and new.encode('string-escape') != new:
        # looks like it could be a binary node
        try:
            new.decode('utf-8')
            new = encoding.tolocal(new) # but cleanly decodes as UTF-8
        except UnicodeDecodeError:
            pass # binary, leave unmodified
    else:
        new = encoding.tolocal(new) # normal path

    if util.safehasattr(proto, 'restore'):

        proto.redirect()

        try:
            r = repo.pushkey(encoding.tolocal(namespace), encoding.tolocal(key),
                             encoding.tolocal(old), new) or False
        except util.Abort:
            r = False

        output = proto.restore()

        return '%s\n%s' % (int(r), output)

    r = repo.pushkey(encoding.tolocal(namespace), encoding.tolocal(key),
                     encoding.tolocal(old), new)
    return '%s\n' % int(r)

def _allowstream(ui):
    return ui.configbool('server', 'uncompressed', True, untrusted=True)

def _walkstreamfiles(repo):
    # this is its own function so extensions can override it
    return repo.store.walk()

@wireprotocommand('stream_out')
def stream(repo, proto):
    '''If the server supports streaming clone, it advertises the "stream"
    capability with a value representing the version and flags of the repo
    it is serving. Client checks to see if it understands the format.

    The format is simple: the server writes out a line with the number
    of files, then the total number of bytes to be transferred (separated
    by a space). Then, for each file, the server first writes the filename
    and file size (separated by the null character), then the file contents.
    '''

    if not _allowstream(repo.ui):
        return '1\n'

    entries = []
    total_bytes = 0
    try:
        # get consistent snapshot of repo, lock during scan
        lock = repo.lock()
        try:
            repo.ui.debug('scanning\n')
            for name, ename, size in _walkstreamfiles(repo):
                if size:
                    entries.append((name, size))
                    total_bytes += size
        finally:
            lock.release()
    except error.LockError:
        return '2\n' # error: 2

    def streamer(repo, entries, total):
        '''stream out all metadata files in repository.'''
        yield '0\n' # success
        repo.ui.debug('%d files, %d bytes to transfer\n' %
                      (len(entries), total_bytes))
        yield '%d %d\n' % (len(entries), total_bytes)

        sopener = repo.sopener
        oldaudit = sopener.mustaudit
        debugflag = repo.ui.debugflag
        sopener.mustaudit = False

        try:
            for name, size in entries:
                if debugflag:
                    repo.ui.debug('sending %s (%d bytes)\n' % (name, size))
                # partially encode name over the wire for backwards compat
                yield '%s\0%d\n' % (store.encodedir(name), size)
                if size <= 65536:
                    fp = sopener(name)
                    try:
                        data = fp.read(size)
                    finally:
                        fp.close()
                    yield data
                else:
                    for chunk in util.filechunkiter(sopener(name), limit=size):
                        yield chunk
        # replace with "finally:" when support for python 2.4 has been dropped
        except Exception:
            sopener.mustaudit = oldaudit
            raise
        sopener.mustaudit = oldaudit

    return streamres(streamer(repo, entries, total_bytes))

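# For illustration (not part of the original module): a successful
# stream_out reply for a two-file repository (names and sizes invented)
# looks like
#
#   0\n                        success marker
#   2 1337\n                   <file count> <total byte count>
#   data/a.i\x001200\n         <name>\0<size>\n, then 1200 raw bytes
#   data/b.i\x00137\n          then 137 raw bytes
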
@wireprotocommand('unbundle', 'heads')
def unbundle(repo, proto, heads):
    their_heads = decodelist(heads)

    try:
        proto.redirect()

        exchange.check_heads(repo, their_heads, 'preparing changes')

        # write bundle data to temporary file because it can be big
        fd, tempname = tempfile.mkstemp(prefix='hg-unbundle-')
        fp = os.fdopen(fd, 'wb+')
        r = 0
        try:
            proto.getfile(fp)
            fp.seek(0)
            gen = exchange.readbundle(repo.ui, fp, None)
            r = exchange.unbundle(repo, gen, their_heads, 'serve',
                                  proto._client())
            if util.safehasattr(r, 'addpart'):
                # The return looks streamable, we are in the bundle2 case
                # and should return a stream.
                return streamres(r.getchunks())
            return pushres(r)

        finally:
            fp.close()
            os.unlink(tempname)
    except error.BundleValueError, exc:
        bundler = bundle2.bundle20(repo.ui)
        errpart = bundler.newpart('B2X:ERROR:UNSUPPORTEDCONTENT')
        if exc.parttype is not None:
            errpart.addparam('parttype', exc.parttype)
        if exc.params:
            errpart.addparam('params', '\0'.join(exc.params))
        return streamres(bundler.getchunks())
    except util.Abort, inst:
        # The old code we moved used sys.stderr directly.
        # We did not change it to minimise code change.
        # This needs to be moved to something proper.
        # Feel free to do it.
        if getattr(inst, 'duringunbundle2', False):
            bundler = bundle2.bundle20(repo.ui)
            manargs = [('message', str(inst))]
            advargs = []
            if inst.hint is not None:
                advargs.append(('hint', inst.hint))
            bundler.addpart(bundle2.bundlepart('B2X:ERROR:ABORT',
                                               manargs, advargs))
            return streamres(bundler.getchunks())
        else:
            sys.stderr.write("abort: %s\n" % inst)
            return pushres(0)
    except error.PushRaced, exc:
        if getattr(exc, 'duringunbundle2', False):
            bundler = bundle2.bundle20(repo.ui)
            bundler.newpart('B2X:ERROR:PUSHRACED', [('message', str(exc))])
            return streamres(bundler.getchunks())
        else:
            return pusherr(str(exc))
@@ -1,802 +1,802 b''
This test is dedicated to testing the bundle2 container format

It tests multiple existing parts to exercise different features of the
container. You probably do not need to touch this test unless you change the
binary encoding of the bundle2 format itself.

Create an extension to test bundle2 API

  $ cat > bundle2.py << EOF
  > """A small extension to test bundle2 implementation
  >
  > Current bundle2 implementation is far too limited to be used in any core
  > code. We still need to be able to test it while it grows up.
  > """
  >
  > import sys, os
  > from mercurial import cmdutil
  > from mercurial import util
  > from mercurial import bundle2
  > from mercurial import scmutil
  > from mercurial import discovery
  > from mercurial import changegroup
  > from mercurial import error
  > from mercurial import obsolete
  >
  >
  > try:
  >     import msvcrt
  >     msvcrt.setmode(sys.stdin.fileno(), os.O_BINARY)
  >     msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)
  >     msvcrt.setmode(sys.stderr.fileno(), os.O_BINARY)
  > except ImportError:
  >     pass
  >
  > cmdtable = {}
  > command = cmdutil.command(cmdtable)
  >
  > ELEPHANTSSONG = """Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
  > Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
  > Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko."""
  > assert len(ELEPHANTSSONG) == 178 # future tests say 178 bytes, trust it.
  >
  > @bundle2.parthandler('test:song')
  > def songhandler(op, part):
  >     """handle a "test:song" bundle2 part, printing the lyrics on stdout"""
  >     op.ui.write('The choir starts singing:\n')
  >     verses = 0
  >     for line in part.read().split('\n'):
  >         op.ui.write('    %s\n' % line)
  >         verses += 1
  >     op.records.add('song', {'verses': verses})
  >
  > @bundle2.parthandler('test:ping')
  > def pinghandler(op, part):
  >     op.ui.write('received ping request (id %i)\n' % part.id)
  >     if op.reply is not None and 'ping-pong' in op.reply.capabilities:
  >         op.ui.write_err('replying to ping request (id %i)\n' % part.id)
  >         op.reply.newpart('test:pong', [('in-reply-to', str(part.id))])
  >
  > @bundle2.parthandler('test:debugreply')
  > def debugreply(op, part):
  >     """print data about the capacity of the bundle reply"""
  >     if op.reply is None:
  >         op.ui.write('debugreply: no reply\n')
  >     else:
  >         op.ui.write('debugreply: capabilities:\n')
  >         for cap in sorted(op.reply.capabilities):
  >             op.ui.write('debugreply:     %r\n' % cap)
  >             for val in op.reply.capabilities[cap]:
  >                 op.ui.write('debugreply:         %r\n' % val)
  >
  > @command('bundle2',
  >          [('', 'param', [], 'stream level parameter'),
  >           ('', 'unknown', False, 'include an unknown mandatory part in the bundle'),
  >           ('', 'unknownparams', False, 'include an unknown part parameters in the bundle'),
  >           ('', 'parts', False, 'include some arbitrary parts to the bundle'),
  >           ('', 'reply', False, 'produce a reply bundle'),
  >           ('', 'pushrace', False, 'includes a check:head part with unknown nodes'),
  >           ('', 'genraise', False, 'includes a part that raise an exception during generation'),
  >           ('r', 'rev', [], 'includes those changeset in the bundle'),],
  >          '[OUTPUTFILE]')
  > def cmdbundle2(ui, repo, path=None, **opts):
  >     """write a bundle2 container on standard output"""
  >     bundler = bundle2.bundle20(ui)
  >     for p in opts['param']:
  >         p = p.split('=', 1)
  >         try:
  >             bundler.addparam(*p)
  >         except ValueError, exc:
  >             raise util.Abort('%s' % exc)
  >
  >     if opts['reply']:
  >         capsstring = 'ping-pong\nelephants=babar,celeste\ncity%3D%21=celeste%2Cville'
  >         bundler.newpart('b2x:replycaps', data=capsstring)
  >
  >     if opts['pushrace']:
  >         # also serves to test the assignment of data outside of init
  >         part = bundler.newpart('b2x:check:heads')
  >         part.data = '01234567890123456789'
  >
  >     revs = opts['rev']
  >     if 'rev' in opts:
  >         revs = scmutil.revrange(repo, opts['rev'])
  >     if revs:
  >         # very crude version of a changegroup part creation
  >         bundled = repo.revs('%ld::%ld', revs, revs)
  >         headmissing = [c.node() for c in repo.set('heads(%ld)', revs)]
  >         headcommon = [c.node() for c in repo.set('parents(%ld) - %ld', revs, revs)]
  >         outgoing = discovery.outgoing(repo.changelog, headcommon, headmissing)
  >         cg = changegroup.getlocalchangegroup(repo, 'test:bundle2', outgoing, None)
  >         bundler.newpart('b2x:changegroup', data=cg.getchunks())
  >
  >     if opts['parts']:
  >         bundler.newpart('test:empty')
  >         # add a second one to make sure we handle multiple parts
  >         bundler.newpart('test:empty')
  >         bundler.newpart('test:song', data=ELEPHANTSSONG)
  >         bundler.newpart('test:debugreply')
  >         mathpart = bundler.newpart('test:math')
  >         mathpart.addparam('pi', '3.14')
  >         mathpart.addparam('e', '2.72')
  >         mathpart.addparam('cooking', 'raw', mandatory=False)
  >         mathpart.data = '42'
  >         # advisory known part with unknown mandatory param
  >         bundler.newpart('test:song', [('randomparam','')])
  >     if opts['unknown']:
  >         bundler.newpart('test:UNKNOWN', data='some random content')
  >     if opts['unknownparams']:
  >         bundler.newpart('test:SONG', [('randomparams', '')])
  >     if opts['parts']:
  >         bundler.newpart('test:ping')
  >     if opts['genraise']:
  >         def genraise():
  >             yield 'first line\n'
  >             raise RuntimeError('Someone set up us the bomb!')
  >         bundler.newpart('b2x:output', data=genraise())
  >
  >     if path is None:
  >         file = sys.stdout
  >     else:
  >         file = open(path, 'wb')
  >
  >     try:
  >         for chunk in bundler.getchunks():
  >             file.write(chunk)
  >     except RuntimeError, exc:
  >         raise util.Abort(exc)
  >
  > @command('unbundle2', [], '')
  > def cmdunbundle2(ui, repo, replypath=None):
  >     """process a bundle2 stream from stdin on the current repo"""
  >     try:
  >         tr = None
  >         lock = repo.lock()
  >         tr = repo.transaction('processbundle')
  >         try:
  >             unbundler = bundle2.unbundle20(ui, sys.stdin)
  >             op = bundle2.processbundle(repo, unbundler, lambda: tr)
  >             tr.close()
  >         except error.BundleValueError, exc:
  >             raise util.Abort('missing support for %s' % exc)
  >         except error.PushRaced, exc:
  >             raise util.Abort('push race: %s' % exc)
  >     finally:
  >         if tr is not None:
  >             tr.release()
  >         lock.release()
  >         remains = sys.stdin.read()
  >         ui.write('%i unread bytes\n' % len(remains))
  >     if op.records['song']:
  >         totalverses = sum(r['verses'] for r in op.records['song'])
  >         ui.write('%i total verses sung\n' % totalverses)
  >     for rec in op.records['changegroup']:
  >         ui.write('addchangegroup return: %i\n' % rec['return'])
  >     if op.reply is not None and replypath is not None:
  >         file = open(replypath, 'wb')
  >         for chunk in op.reply.getchunks():
  >             file.write(chunk)
  >
  > @command('statbundle2', [], '')
  > def cmdstatbundle2(ui, repo):
  >     """print statistics about the bundle2 container read from stdin"""
  >     unbundler = bundle2.unbundle20(ui, sys.stdin)
  >     try:
  >         params = unbundler.params
  >     except error.BundleValueError, exc:
  >         raise util.Abort('unknown parameters: %s' % exc)
  >     ui.write('options count: %i\n' % len(params))
  >     for key in sorted(params):
  >         ui.write('- %s\n' % key)
  >         value = params[key]
  >         if value is not None:
  >             ui.write('    %s\n' % value)
  >     count = 0
  >     for p in unbundler.iterparts():
  >         count += 1
  >         ui.write('  :%s:\n' % p.type)
  >         ui.write('    mandatory: %i\n' % len(p.mandatoryparams))
  >         ui.write('    advisory: %i\n' % len(p.advisoryparams))
  >         ui.write('    payload: %i bytes\n' % len(p.read()))
  >     ui.write('parts count: %i\n' % count)
  > EOF
  $ cat >> $HGRCPATH << EOF
  > [extensions]
  > bundle2=$TESTTMP/bundle2.py
  > [experimental]
  > bundle2-exp=True
  > evolution=createmarkers
  > [ui]
  > ssh=python "$TESTDIR/dummyssh"
  > logtemplate={rev}:{node|short} {phase} {author} {bookmarks} {desc|firstline}
  > [web]
  > push_ssl = false
  > allow_push = *
  > [phases]
  > publish=False
  > EOF

The extension requires a repo (currently unused)

  $ hg init main
  $ cd main
  $ touch a
  $ hg add a
  $ hg commit -m 'a'


Empty bundle
=================

- no option
- no parts

Test bundling

  $ hg bundle2
-  HG2X\x00\x00\x00\x00 (no-eol) (esc)
+  HG2Y\x00\x00\x00\x00\x00\x00\x00\x00 (no-eol) (esc)
238
238
239 Test unbundling
239 Test unbundling
240
240
241 $ hg bundle2 | hg statbundle2
241 $ hg bundle2 | hg statbundle2
242 options count: 0
242 options count: 0
243 parts count: 0
243 parts count: 0
244
244
245 Test old style bundle are detected and refused
245 Test old style bundle are detected and refused
246
246
247 $ hg bundle --all ../bundle.hg
247 $ hg bundle --all ../bundle.hg
248 1 changesets found
248 1 changesets found
249 $ hg statbundle2 < ../bundle.hg
249 $ hg statbundle2 < ../bundle.hg
250 abort: unknown bundle version 10
250 abort: unknown bundle version 10
251 [255]
251 [255]
252
252
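These first round-trips pin down the new framing: a four-byte magic string ('HG2Y', while old-style bundles start with 'HG10' and are refused above), then the stream-level parameters, then the parts. As a reading aid (a minimal sketch of the framing, not Mercurial's actual parser; readstreamheader is a hypothetical helper):

    import struct

    def readstreamheader(fh):
        # read the magic string of the stream
        magic = fh.read(4)
        if magic == 'HG10':
            raise ValueError('old-style bundle, not bundle2')
        if magic != 'HG2Y':
            raise ValueError('unknown bundle version %s' % magic[2:])
        # the size of the parameter blob is now a *signed* big-endian
        # int32 instead of the old unsigned 16-bit integer
        size = struct.unpack('>i', fh.read(4))[0]
        return fh.read(size)

For the empty bundle this reads an empty parameter blob; the four trailing zero bytes above are the end-of-stream marker, a part header size of 0.
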
Test parameters
=================

- some options
- no parts

advisory parameters, no value
-------------------------------

Simplest possible parameter form

Test generation of a simple option

$ hg bundle2 --param 'caution'
HG2Y\x00\x00\x00\x07caution\x00\x00\x00\x00 (no-eol) (esc)

Test unbundling

$ hg bundle2 --param 'caution' | hg statbundle2
options count: 1
- caution
parts count: 0

Test generation of multiple options

$ hg bundle2 --param 'caution' --param 'meal'
HG2Y\x00\x00\x00\x0ccaution meal\x00\x00\x00\x00 (no-eol) (esc)

Test unbundling

$ hg bundle2 --param 'caution' --param 'meal' | hg statbundle2
options count: 2
- caution
- meal
parts count: 0

advisory parameters, with value
-------------------------------

Test generation

$ hg bundle2 --param 'caution' --param 'meal=vegan' --param 'elephants'
HG2Y\x00\x00\x00\x1ccaution meal=vegan elephants\x00\x00\x00\x00 (no-eol) (esc)

Test unbundling

$ hg bundle2 --param 'caution' --param 'meal=vegan' --param 'elephants' | hg statbundle2
options count: 3
- caution
- elephants
- meal
vegan
parts count: 0

parameter with special characters in its value
---------------------------------------------------

Test generation

$ hg bundle2 --param 'e|! 7/=babar%#==tutu' --param simple
HG2Y\x00\x00\x00)e%7C%21%207/=babar%25%23%3D%3Dtutu simple\x00\x00\x00\x00 (no-eol) (esc)

Test unbundling

$ hg bundle2 --param 'e|! 7/=babar%#==tutu' --param simple | hg statbundle2
options count: 2
- e|! 7/
babar%#==tutu
- simple
parts count: 0

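The quoting above follows the stream-parameter grammar: entries are space separated, the first '=' of an entry splits name from value, and both halves are urlquoted so that no bare ' ' or '=' can survive inside them. A minimal sketch of the encoding side (encodeparams is a hypothetical helper, not Mercurial's own code):

    from urllib import quote

    def encodeparams(params):
        # params is a list of (name, value) pairs; value may be None
        chunks = []
        for name, value in params:
            chunk = quote(name)
            if value is not None:
                chunk += '=' + quote(value)
            chunks.append(chunk)
        return ' '.join(chunks)

    # encodeparams([('e|! 7/', 'babar%#==tutu'), ('simple', None)])
    # gives 'e%7C%21%207/=babar%25%23%3D%3Dtutu simple', matching the
    # blob dumped above
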
Test unknown mandatory option
---------------------------------------------------

$ hg bundle2 --param 'Gravity' | hg statbundle2
abort: unknown parameters: Stream Parameter - Gravity
[255]

Test debug output
---------------------------------------------------

bundling debug

$ hg bundle2 --debug --param 'e|! 7/=babar%#==tutu' --param simple ../out.hg2
start emission of HG2Y stream
bundle parameter: e%7C%21%207/=babar%25%23%3D%3Dtutu simple
start of parts
end of bundle

file content is ok

$ cat ../out.hg2
HG2Y\x00\x00\x00)e%7C%21%207/=babar%25%23%3D%3Dtutu simple\x00\x00\x00\x00 (no-eol) (esc)

unbundling debug

$ hg statbundle2 --debug < ../out.hg2
start processing of HG2Y stream
reading bundle2 stream parameters
ignoring unknown parameter 'e|! 7/'
ignoring unknown parameter 'simple'
options count: 2
- e|! 7/
babar%#==tutu
- simple
start extraction of bundle2 parts
part header size: 0
end of bundle2 stream
parts count: 0


Test buggy input
---------------------------------------------------

empty parameter name

$ hg bundle2 --param '' --quiet
abort: empty parameter name
[255]

bad parameter name

$ hg bundle2 --param 42babar
abort: non letter first character: '42babar'
[255]


Test part
=================

$ hg bundle2 --parts ../parts.hg2 --debug
start emission of HG2Y stream
bundle parameter:
start of parts
bundle part: "test:empty"
bundle part: "test:empty"
bundle part: "test:song"
bundle part: "test:debugreply"
bundle part: "test:math"
bundle part: "test:song"
bundle part: "test:ping"
end of bundle

$ cat ../parts.hg2
HG2Y\x00\x00\x00\x00\x00\x00\x00\x11 (esc)
test:empty\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x11 (esc)
test:empty\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10 test:song\x00\x00\x00\x02\x00\x00\x00\x00\x00\xb2Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko (esc)
Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.\x00\x00\x00\x00\x00\x00\x00\x16\x0ftest:debugreply\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x00\x00+ test:math\x00\x00\x00\x04\x02\x01\x02\x04\x01\x04\x07\x03pi3.14e2.72cookingraw\x00\x00\x00\x0242\x00\x00\x00\x00\x00\x00\x00\x1d test:song\x00\x00\x00\x05\x01\x00\x0b\x00randomparam\x00\x00\x00\x00\x00\x00\x00\x10 test:ping\x00\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 (no-eol) (esc)


$ hg statbundle2 < ../parts.hg2
options count: 0
:test:empty:
mandatory: 0
advisory: 0
payload: 0 bytes
:test:empty:
mandatory: 0
advisory: 0
payload: 0 bytes
:test:song:
mandatory: 0
advisory: 0
payload: 178 bytes
:test:debugreply:
mandatory: 0
advisory: 0
payload: 0 bytes
:test:math:
mandatory: 2
advisory: 1
payload: 2 bytes
:test:song:
mandatory: 1
advisory: 0
payload: 0 bytes
:test:ping:
mandatory: 0
advisory: 0
payload: 0 bytes
parts count: 7

$ hg statbundle2 --debug < ../parts.hg2
start processing of HG2Y stream
reading bundle2 stream parameters
options count: 0
start extraction of bundle2 parts
part header size: 17
part type: "test:empty"
part id: "0"
part parameters: 0
:test:empty:
mandatory: 0
advisory: 0
payload chunk size: 0
payload: 0 bytes
part header size: 17
part type: "test:empty"
part id: "1"
part parameters: 0
:test:empty:
mandatory: 0
advisory: 0
payload chunk size: 0
payload: 0 bytes
part header size: 16
part type: "test:song"
part id: "2"
part parameters: 0
:test:song:
mandatory: 0
advisory: 0
payload chunk size: 178
payload chunk size: 0
payload: 178 bytes
part header size: 22
part type: "test:debugreply"
part id: "3"
part parameters: 0
:test:debugreply:
mandatory: 0
advisory: 0
payload chunk size: 0
payload: 0 bytes
part header size: 43
part type: "test:math"
part id: "4"
part parameters: 3
:test:math:
mandatory: 2
advisory: 1
payload chunk size: 2
payload chunk size: 0
payload: 2 bytes
part header size: 29
part type: "test:song"
part id: "5"
part parameters: 1
:test:song:
mandatory: 1
advisory: 0
payload chunk size: 0
payload: 0 bytes
part header size: 16
part type: "test:ping"
part id: "6"
part parameters: 0
:test:ping:
mandatory: 0
advisory: 0
payload chunk size: 0
payload: 0 bytes
part header size: 0
end of bundle2 stream
parts count: 7

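The header sizes printed above can be reproduced by hand. A part header is: one byte of part-type length, the type string itself, a four-byte part id, one byte each for the mandatory and advisory parameter counts, one (name size, value size) byte pair per parameter, and finally the parameter names and values. A sketch of that arithmetic (partheadersize is a hypothetical helper; sizes only, not an encoder):

    def partheadersize(parttype, params):
        # fixed portion: type length, type, part id, two param counts
        size = 1 + len(parttype) + 4 + 1 + 1
        # one byte pair of sizes per parameter, then the data itself
        for name, value in params:
            size += 2 + len(name) + len(value)
        return size

    # partheadersize('test:empty', []) == 17
    # partheadersize('test:math',
    #                [('pi', '3.14'), ('e', '2.72'), ('cooking', 'raw')]) == 43

Both match the 'part header size' lines above; the size field preceding each header is one of the integers widened (and made signed) by this change.
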
Test actual unbundling of test parts
=======================================

Process the bundle

$ hg unbundle2 --debug < ../parts.hg2
start processing of HG2Y stream
reading bundle2 stream parameters
start extraction of bundle2 parts
part header size: 17
part type: "test:empty"
part id: "0"
part parameters: 0
ignoring unsupported advisory part test:empty
payload chunk size: 0
part header size: 17
part type: "test:empty"
part id: "1"
part parameters: 0
ignoring unsupported advisory part test:empty
payload chunk size: 0
part header size: 16
part type: "test:song"
part id: "2"
part parameters: 0
found a handler for part 'test:song'
The choir starts singing:
payload chunk size: 178
payload chunk size: 0
Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
part header size: 22
part type: "test:debugreply"
part id: "3"
part parameters: 0
found a handler for part 'test:debugreply'
debugreply: no reply
payload chunk size: 0
part header size: 43
part type: "test:math"
part id: "4"
part parameters: 3
ignoring unsupported advisory part test:math
payload chunk size: 2
payload chunk size: 0
part header size: 29
part type: "test:song"
part id: "5"
part parameters: 1
found a handler for part 'test:song'
ignoring unsupported advisory part test:song - randomparam
payload chunk size: 0
part header size: 16
part type: "test:ping"
part id: "6"
part parameters: 0
found a handler for part 'test:ping'
received ping request (id 6)
payload chunk size: 0
part header size: 0
end of bundle2 stream
0 unread bytes
3 total verses sung

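Part payloads travel as a sequence of length-prefixed chunks, which is what the alternating 'payload chunk size' lines show: one 178-byte chunk for the song, then a zero size closing the payload. A minimal consumer sketch (iterchunks is a hypothetical helper; treating negative sizes as reserved is an assumption based on the switch to signed fields, nothing in this test exercises them):

    import struct

    def iterchunks(fh):
        while True:
            # chunk sizes are signed big-endian int32s in the new format
            chunksize = struct.unpack('>i', fh.read(4))[0]
            if chunksize == 0:
                break  # a zero-length chunk terminates the payload
            yield fh.read(chunksize)
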
Unbundle with an unknown mandatory part
(should abort)

$ hg bundle2 --parts --unknown ../unknown.hg2

$ hg unbundle2 < ../unknown.hg2
The choir starts singing:
Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
debugreply: no reply
0 unread bytes
abort: missing support for test:unknown
[255]

Unbundle with unknown mandatory part parameters
(should abort)

$ hg bundle2 --unknownparams ../unknown.hg2

$ hg unbundle2 < ../unknown.hg2
0 unread bytes
abort: missing support for test:song - randomparams
[255]

Unbundle with a reply

$ hg bundle2 --parts --reply ../parts-reply.hg2
$ hg unbundle2 ../reply.hg2 < ../parts-reply.hg2
0 unread bytes
3 total verses sung

The reply is a bundle

$ cat ../reply.hg2
HG2Y\x00\x00\x00\x00\x00\x00\x00\x1f (esc)
b2x:output\x00\x00\x00\x00\x00\x01\x0b\x01in-reply-to3\x00\x00\x00\xd9The choir starts singing: (esc)
Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
\x00\x00\x00\x00\x00\x00\x00\x1f (esc)
b2x:output\x00\x00\x00\x01\x00\x01\x0b\x01in-reply-to4\x00\x00\x00\xc9debugreply: capabilities: (esc)
debugreply: 'city=!'
debugreply: 'celeste,ville'
debugreply: 'elephants'
debugreply: 'babar'
debugreply: 'celeste'
debugreply: 'ping-pong'
\x00\x00\x00\x00\x00\x00\x00\x1e test:pong\x00\x00\x00\x02\x01\x00\x0b\x01in-reply-to7\x00\x00\x00\x00\x00\x00\x00\x1f (esc)
b2x:output\x00\x00\x00\x03\x00\x01\x0b\x01in-reply-to7\x00\x00\x00=received ping request (id 7) (esc)
replying to ping request (id 7)
\x00\x00\x00\x00\x00\x00\x00\x00 (no-eol) (esc)

The reply is valid

$ hg statbundle2 < ../reply.hg2
options count: 0
:b2x:output:
mandatory: 0
advisory: 1
payload: 217 bytes
:b2x:output:
mandatory: 0
advisory: 1
payload: 201 bytes
:test:pong:
mandatory: 1
advisory: 0
payload: 0 bytes
:b2x:output:
mandatory: 0
advisory: 1
payload: 61 bytes
parts count: 4

Unbundle the reply to get the output:

$ hg unbundle2 < ../reply.hg2
remote: The choir starts singing:
remote: Patali Dirapata, Cromda Cromda Ripalo, Pata Pata, Ko Ko Ko
remote: Bokoro Dipoulito, Rondi Rondi Pepino, Pata Pata, Ko Ko Ko
remote: Emana Karassoli, Loucra Loucra Ponponto, Pata Pata, Ko Ko Ko.
remote: debugreply: capabilities:
remote: debugreply: 'city=!'
remote: debugreply: 'celeste,ville'
remote: debugreply: 'elephants'
remote: debugreply: 'babar'
remote: debugreply: 'celeste'
remote: debugreply: 'ping-pong'
remote: received ping request (id 7)
remote: replying to ping request (id 7)
0 unread bytes

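Every part of the reply carries an advisory 'in-reply-to' parameter naming the id of the part it answers; that is how the 'remote: ' lines above are tied back to their origin. A sketch of such routing (this assumes advisoryparams iterates as (name, value) pairs, and it is not the code hg itself runs):

    def showreply(ui, unbundler):
        for part in unbundler.iterparts():
            origin = dict(part.advisoryparams).get('in-reply-to')
            if part.type == 'b2x:output':
                # output captured remotely is replayed with a prefix
                for line in part.read().splitlines():
                    ui.write('remote: %s\n' % line)
            elif origin is not None:
                ui.write('%s replies to part %s\n' % (part.type, origin))
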
Test push race detection

$ hg bundle2 --pushrace ../part-race.hg2

$ hg unbundle2 < ../part-race.hg2
0 unread bytes
abort: push race: repository changed while pushing - please try again
[255]

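The race is caught by a check part carrying the heads the client saw when it built the push. A sketch of the server-side comparison (the payload as a run of 20-byte binary nodes and the exact exception wiring are assumptions of this sketch):

    from mercurial import error

    def checkheads(op, inpart):
        expected = []
        node = inpart.read(20)
        while len(node) == 20:
            expected.append(node)
            node = inpart.read(20)
        if sorted(expected) != sorted(op.repo.heads()):
            raise error.PushRaced('repository changed while pushing - '
                                  'please try again')
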
Support for changegroup
===================================

$ hg unbundle $TESTDIR/bundles/rebase.hg
adding changesets
adding manifests
adding file changes
added 8 changesets with 7 changes to 7 files (+3 heads)
(run 'hg heads' to see heads, 'hg merge' to merge)

$ hg log -G
o 8:02de42196ebe draft Nicolas Dumazet <nicdumz.commits@gmail.com> H
|
| o 7:eea13746799a draft Nicolas Dumazet <nicdumz.commits@gmail.com> G
|/|
o | 6:24b6387c8c8c draft Nicolas Dumazet <nicdumz.commits@gmail.com> F
| |
| o 5:9520eea781bc draft Nicolas Dumazet <nicdumz.commits@gmail.com> E
|/
| o 4:32af7686d403 draft Nicolas Dumazet <nicdumz.commits@gmail.com> D
| |
| o 3:5fddd98957c8 draft Nicolas Dumazet <nicdumz.commits@gmail.com> C
| |
| o 2:42ccdea3bb16 draft Nicolas Dumazet <nicdumz.commits@gmail.com> B
|/
o 1:cd010b8cd998 draft Nicolas Dumazet <nicdumz.commits@gmail.com> A

@ 0:3903775176ed draft test a


$ hg bundle2 --debug --rev '8+7+5+4' ../rev.hg2
4 changesets found
list of changesets:
32af7686d403cf45b5d95f2d70cebea587ac806a
9520eea781bcca16c1e15acc0ba14335a0e8e5ba
eea13746799a9e0bfd88f29d3c2e9dc9389f524f
02de42196ebee42ef284b6780a87cdc96e8eaab6
start emission of HG2Y stream
bundle parameter:
start of parts
bundle part: "b2x:changegroup"
bundling: 1/4 changesets (25.00%)
bundling: 2/4 changesets (50.00%)
bundling: 3/4 changesets (75.00%)
bundling: 4/4 changesets (100.00%)
bundling: 1/4 manifests (25.00%)
bundling: 2/4 manifests (50.00%)
bundling: 3/4 manifests (75.00%)
bundling: 4/4 manifests (100.00%)
bundling: D 1/3 files (33.33%)
bundling: E 2/3 files (66.67%)
bundling: H 3/3 files (100.00%)
end of bundle

$ cat ../rev.hg2
HG2Y\x00\x00\x00\x00\x00\x00\x00\x16\x0fb2x:changegroup\x00\x00\x00\x00\x00\x00\x00\x00\x06\x13\x00\x00\x00\xa42\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j_\xdd\xd9\x89W\xc8\xa5JMCm\xfe\x1d\xa9\xd8\x7f!\xa1\xb9{\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)6e1f4c47ecb533ffd0c8e52cdc88afb6cd39e20c (esc)
\x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x02D (esc)
\x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01D\x00\x00\x00\xa4\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\xcd\x01\x0b\x8c\xd9\x98\xf3\x98\x1aZ\x81\x15\xf9O\x8d\xa4\xabP`\x89\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)4dece9c826f69490507b98c6383a3009b295837d (esc)
\x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x02E (esc)
\x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01E\x00\x00\x00\xa2\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO$\xb68|\x8c\x8c\xae7\x17\x88\x80\xf3\xfa\x95\xde\xd3\xcb\x1c\xf7\x85\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)365b93d57fdf4814e2b5911d6bacff2b12014441 (esc)
\x00\x00\x00f\x00\x00\x00h\x00\x00\x00\x00\x00\x00\x00i\x00\x00\x00j\x00\x00\x00\x01G\x00\x00\x00\xa4\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
\x87\xcd\xc9n\x8e\xaa\xb6$\xb68|\x8c\x8c\xae7\x17\x88\x80\xf3\xfa\x95\xde\xd3\xcb\x1c\xf7\x85\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
\x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00\x00\x00\x00\x00)\x00\x00\x00)8bee48edc7318541fc0013ee41b089276a8c24bf (esc)
\x00\x00\x00f\x00\x00\x00f\x00\x00\x00\x02H (esc)
\x00\x00\x00g\x00\x00\x00h\x00\x00\x00\x01H\x00\x00\x00\x00\x00\x00\x00\x8bn\x1fLG\xec\xb53\xff\xd0\xc8\xe5,\xdc\x88\xaf\xb6\xcd9\xe2\x0cf\xa5\xa0\x18\x17\xfd\xf5#\x9c'8\x02\xb5\xb7a\x8d\x05\x1c\x89\xe4\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x81\x00\x00\x00\x81\x00\x00\x00+D\x00c3f1ca2924c16a19b0656a84900e504e5b0aec2d (esc)
\x00\x00\x00\x8bM\xec\xe9\xc8&\xf6\x94\x90P{\x98\xc68:0 \xb2\x95\x83}\x00}\x8c\x9d\x88\x84\x13%\xf5\xc6\xb0cq\xb3[N\x8a+\x1a\x83\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00+\x00\x00\x00\xac\x00\x00\x00+E\x009c6fd0350a6c0d0c49d4a9c5017cf07043f54e58 (esc)
\x00\x00\x00\x8b6[\x93\xd5\x7f\xdfH\x14\xe2\xb5\x91\x1dk\xac\xff+\x12\x01DA(\xa5\x84\xc6^\xf1!\xf8\x9e\xb6j\xb7\xd0\xbc\x15=\x80\x99\xe7\xceM\xec\xe9\xc8&\xf6\x94\x90P{\x98\xc68:0 \xb2\x95\x83}\xee\xa17Fy\x9a\x9e\x0b\xfd\x88\xf2\x9d<.\x9d\xc98\x9fRO\x00\x00\x00V\x00\x00\x00V\x00\x00\x00+F\x0022bfcfd62a21a3287edbd4d656218d0f525ed76a (esc)
\x00\x00\x00\x97\x8b\xeeH\xed\xc71\x85A\xfc\x00\x13\xeeA\xb0\x89'j\x8c$\xbf(\xa5\x84\xc6^\xf1!\xf8\x9e\xb6j\xb7\xd0\xbc\x15=\x80\x99\xe7\xce\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
\x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00+\x00\x00\x00V\x00\x00\x00\x00\x00\x00\x00\x81\x00\x00\x00\x81\x00\x00\x00+H\x008500189e74a9e0475e822093bc7db0d631aeb0b4 (esc)
\x00\x00\x00\x00\x00\x00\x00\x05D\x00\x00\x00b\xc3\xf1\xca)$\xc1j\x19\xb0ej\x84\x90\x0ePN[ (esc)
\xec-\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x002\xafv\x86\xd4\x03\xcfE\xb5\xd9_-p\xce\xbe\xa5\x87\xac\x80j\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02D (esc)
\x00\x00\x00\x00\x00\x00\x00\x05E\x00\x00\x00b\x9co\xd05 (esc)
l\r (no-eol) (esc)
\x0cI\xd4\xa9\xc5\x01|\xf0pC\xf5NX\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x95 \xee\xa7\x81\xbc\xca\x16\xc1\xe1Z\xcc\x0b\xa1C5\xa0\xe8\xe5\xba\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02E (esc)
\x00\x00\x00\x00\x00\x00\x00\x05H\x00\x00\x00b\x85\x00\x18\x9et\xa9\xe0G^\x82 \x93\xbc}\xb0\xd61\xae\xb0\xb4\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\xdeB\x19n\xbe\xe4.\xf2\x84\xb6x (esc)
\x87\xcd\xc9n\x8e\xaa\xb6\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02H (esc)
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00 (no-eol) (esc)

$ hg unbundle2 < ../rev.hg2
adding changesets
adding manifests
adding file changes
added 0 changesets with 0 changes to 3 files
0 unread bytes
addchangegroup return: 1

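The 'addchangegroup return: 1' line closes the loop with the extension at the top of this test: the changegroup part handler stores its result in the operation's records, and cmdunbundle2 later reads it back from op.records['changegroup']. Schematically (applychangegroup is a made-up stand-in for the real changegroup application, and the records.add signature is assumed):

    def handlechangegroup(op, inpart):
        ret = applychangegroup(op.repo, inpart)  # hypothetical helper
        # file the result where cmdunbundle2 finds it again as
        # op.records['changegroup'][...]['return']
        op.records.add('changegroup', {'return': ret})
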
With a reply

$ hg bundle2 --rev '8+7+5+4' --reply ../rev-rr.hg2
$ hg unbundle2 ../rev-reply.hg2 < ../rev-rr.hg2
0 unread bytes
addchangegroup return: 1

$ cat ../rev-reply.hg2
HG2Y\x00\x00\x00\x00\x00\x00\x003\x15b2x:reply:changegroup\x00\x00\x00\x00\x00\x02\x0b\x01\x06\x01in-reply-to1return1\x00\x00\x00\x00\x00\x00\x00\x1f (esc)
b2x:output\x00\x00\x00\x01\x00\x01\x0b\x01in-reply-to1\x00\x00\x00dadding changesets (esc)
adding manifests
adding file changes
added 0 changesets with 0 changes to 3 files
\x00\x00\x00\x00\x00\x00\x00\x00 (no-eol) (esc)

Check handling of exception during generation.
----------------------------------------------
(this is currently not right)

$ hg bundle2 --genraise > ../genfailed.hg2
abort: Someone set up us the bomb!
[255]

Should still be a valid bundle
(this is currently not right)

$ cat ../genfailed.hg2
HG2Y\x00\x00\x00\x00\x00\x00\x00\x11 (esc)
b2x:output\x00\x00\x00\x00\x00\x00 (no-eol) (esc)

And its handling on the other side raises a clean exception
(this is currently not right)

$ cat ../genfailed.hg2 | hg unbundle2
0 unread bytes
abort: stream ended unexpectedly (got 0 bytes, expected 4)
[255]


$ cd ..