obsmarker: move bundle2caps from the localrepo class to the bundle2 module...
Pierre-Yves David, r22341:2d16b396 default
# bundle2.py - generic container format to transmit arbitrary data.
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""Handling of the new bundle2 format

The goal of bundle2 is to act as an atomic packet to transmit a set of
payloads in an application agnostic way. It consists of a sequence of "parts"
that will be handed to and processed by the application layer.


General format architecture
===========================

The format is architectured as follows:

 - magic string
 - stream level parameters
 - payload parts (any number)
 - end of stream marker.

The binary format
============================

All numbers are unsigned and big-endian.

stream level parameters
------------------------

The binary format is as follows:

:params size: (16 bits integer)

  The total number of Bytes used by the parameters

:params value: arbitrary number of Bytes

  A blob of `params size` containing the serialized version of all stream
  level parameters.

  The blob contains a space separated list of parameters. Parameters with a
  value are stored in the form `<name>=<value>`. Both name and value are
  urlquoted.

  Empty names are forbidden.

  Names MUST start with a letter. If this first letter is lower case, the
  parameter is advisory and can be safely ignored. However when the first
  letter is capital, the parameter is mandatory and the bundling process MUST
  stop if it is not able to process it.

  Stream parameters use a simple textual format for two main reasons:

  - Stream level parameters should remain simple and we want to discourage any
    crazy usage.
  - Textual data allow easy human inspection of a bundle2 header in case of
    trouble.

  Any applicative level options MUST go into a bundle2 part instead.

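  As an illustration (a sketch only, with two made-up parameter names, one
  mandatory and one advisory), such a blob could be built with::

    import urllib
    params = [('Compression', 'gzip'), ('cachehint', None)]
    blocks = []
    for name, value in params:
        name = urllib.quote(name)
        if value is not None:
            name = '%s=%s' % (name, urllib.quote(value))
        blocks.append(name)
    blob = ' '.join(blocks)    # 'Compression=gzip cachehint'
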
Payload part
------------------------

The binary format is as follows:

:header size: (16 bits integer)

  The total number of Bytes used by the part headers. When the header is empty
  (size = 0) this is interpreted as the end of stream marker.

:header:

  The header defines how to interpret the part. It contains two pieces of
  data: the part type, and the part parameters.

  The part type is used to route the part to an application level handler
  that can interpret the payload.

  Part parameters are passed to the application level handler. They are
  meant to convey information that will help the application level object to
  interpret the part payload.

  The binary format of the header is as follows:

  :typesize: (one byte)

  :parttype: alphanumerical part name

  :partid: A 32bits integer (unique in the bundle) that can be used to refer
           to this part.

  :parameters:

    A part's parameters may have arbitrary content; the binary structure is::

        <mandatory-count><advisory-count><param-sizes><param-data>

    :mandatory-count: 1 byte, number of mandatory parameters

    :advisory-count: 1 byte, number of advisory parameters

    :param-sizes:

        N couples of bytes, where N is the total number of parameters. Each
        couple contains (<size-of-key>, <size-of-value>) for one parameter.

    :param-data:

        A blob of bytes from which each parameter key and value can be
        retrieved using the list of size couples stored in the previous
        field.

        Mandatory parameters come first, then the advisory ones.

        Each parameter's key MUST be unique within the part.

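    A packing sketch for the structure above (mirroring what
    ``bundlepart.getchunks`` does below; parameter names are made up)::

        import struct
        manpar = [('key1', 'value1')]               # mandatory
        advpar = [('key2', 'value2')]               # advisory
        block = [struct.pack('>BB', len(manpar), len(advpar))]
        sizes = []
        for key, value in manpar + advpar:
            sizes.extend([len(key), len(value)])
        block.append(struct.pack('>' + 'BB' * len(manpar + advpar), *sizes))
        for key, value in manpar + advpar:
            block.append(key)
            block.append(value)
        blob = ''.join(block)
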
:payload:

    The payload is a series of `<chunksize><chunkdata>`.

    `chunksize` is a 32 bits integer, `chunkdata` are plain bytes (as many as
    `chunksize` says). The payload part is concluded by a zero size chunk.

    The current implementation always produces either zero or one chunk.
    This is an implementation limitation that will ultimately be lifted.

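    A reading sketch (``read`` stands for any helper returning exactly ``n``
    bytes from the stream)::

        import struct
        chunks = []
        chunksize = struct.unpack('>I', read(4))[0]
        while chunksize:
            chunks.append(read(chunksize))
            chunksize = struct.unpack('>I', read(4))[0]
        payload = ''.join(chunks)
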
Bundle processing
============================

Each part is processed in order using a "part handler". Handlers are
registered for certain part types.

The matching of a part to its handler is case insensitive. The case of the
part type is used to know if a part is mandatory or advisory. If the part type
contains any uppercase char it is considered mandatory. When no handler is
known for a mandatory part, the process is aborted and an exception is raised.
If the part is advisory and no handler is known, the part is ignored. When the
process is aborted, the full bundle is still read from the stream to keep the
channel usable. But none of the parts read after an abort are processed. In
the future, dropping the stream may become an option for channels we do not
care to preserve.
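
The mandatory/advisory status of a part is therefore a pure function of the
case of its type, e.g.::

  key = parttype.lower()
  mandatory = (parttype != key)    # any upper case char makes it mandatory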
143 """
143 """
144
144
145 import util
145 import util
import struct
import urllib
import string
import pushkey

import changegroup, error
from i18n import _

_pack = struct.pack
_unpack = struct.unpack

_magicstring = 'HG2X'

_fstreamparamsize = '>H'
_fpartheadersize = '>H'
_fparttypesize = '>B'
_fpartid = '>I'
_fpayloadsize = '>I'
_fpartparamcount = '>BB'

preferedchunksize = 4096

def _makefpartparamsizes(nbparams):
169 """return a struct format to read part parameter sizes
169 """return a struct format to read part parameter sizes
170
170
171 The number parameters is variable so we need to build that format
171 The number parameters is variable so we need to build that format
172 dynamically.
172 dynamically.
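
    e.g. two parameters yield two ``(key size, value size)`` couples::

      >>> _makefpartparamsizes(2)
      '>BBBB'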
173 """
173 """
174 return '>'+('BB'*nbparams)
174 return '>'+('BB'*nbparams)
175
175
176 parthandlermapping = {}
176 parthandlermapping = {}

def parthandler(parttype, params=()):
    """decorator that registers a function as a bundle2 part handler

    eg::

        @parthandler('myparttype', ('mandatory', 'param', 'handled'))
        def myparttypehandler(...):
            '''process a part of type "my part".'''
            ...
    """
    def _decorator(func):
        lparttype = parttype.lower() # enforce lower case matching.
        assert lparttype not in parthandlermapping
        parthandlermapping[lparttype] = func
        func.params = frozenset(params)
        return func
    return _decorator

class unbundlerecords(object):
197 """keep record of what happens during and unbundle
197 """keep record of what happens during and unbundle
198
198
199 New records are added using `records.add('cat', obj)`. Where 'cat' is a
199 New records are added using `records.add('cat', obj)`. Where 'cat' is a
200 category of record and obj is an arbitrary object.
200 category of record and obj is an arbitrary object.
201
201
202 `records['cat']` will return all entries of this category 'cat'.
202 `records['cat']` will return all entries of this category 'cat'.
203
203
204 Iterating on the object itself will yield `('category', obj)` tuples
204 Iterating on the object itself will yield `('category', obj)` tuples
205 for all entries.
205 for all entries.
206
206
207 All iterations happens in chronological order.
207 All iterations happens in chronological order.
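
    A minimal illustration::

      >>> records = unbundlerecords()
      >>> records.add('changegroup', {'return': 1})
      >>> records['changegroup']
      ({'return': 1},)
      >>> list(records)
      [('changegroup', {'return': 1})]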
208 """
208 """
209
209
210 def __init__(self):
210 def __init__(self):
211 self._categories = {}
211 self._categories = {}
212 self._sequences = []
212 self._sequences = []
213 self._replies = {}
213 self._replies = {}
214
214
215 def add(self, category, entry, inreplyto=None):
215 def add(self, category, entry, inreplyto=None):
216 """add a new record of a given category.
216 """add a new record of a given category.
217
217
218 The entry can then be retrieved in the list returned by
218 The entry can then be retrieved in the list returned by
219 self['category']."""
219 self['category']."""
220 self._categories.setdefault(category, []).append(entry)
220 self._categories.setdefault(category, []).append(entry)
221 self._sequences.append((category, entry))
221 self._sequences.append((category, entry))
222 if inreplyto is not None:
222 if inreplyto is not None:
223 self.getreplies(inreplyto).add(category, entry)
223 self.getreplies(inreplyto).add(category, entry)
224
224
225 def getreplies(self, partid):
225 def getreplies(self, partid):
226 """get the subrecords that replies to a specific part"""
226 """get the subrecords that replies to a specific part"""
227 return self._replies.setdefault(partid, unbundlerecords())
227 return self._replies.setdefault(partid, unbundlerecords())
228
228
229 def __getitem__(self, cat):
229 def __getitem__(self, cat):
230 return tuple(self._categories.get(cat, ()))
230 return tuple(self._categories.get(cat, ()))
231
231
232 def __iter__(self):
232 def __iter__(self):
233 return iter(self._sequences)
233 return iter(self._sequences)
234
234
235 def __len__(self):
235 def __len__(self):
236 return len(self._sequences)
236 return len(self._sequences)
237
237
238 def __nonzero__(self):
238 def __nonzero__(self):
239 return bool(self._sequences)
239 return bool(self._sequences)
240
240
241 class bundleoperation(object):
241 class bundleoperation(object):
242 """an object that represents a single bundling process
242 """an object that represents a single bundling process
243
243
244 Its purpose is to carry unbundle-related objects and states.
244 Its purpose is to carry unbundle-related objects and states.
245
245
246 A new object should be created at the beginning of each bundle processing.
246 A new object should be created at the beginning of each bundle processing.
247 The object is to be returned by the processing function.
247 The object is to be returned by the processing function.
248
248
249 The object has very little content now it will ultimately contain:
249 The object has very little content now it will ultimately contain:
250 * an access to the repo the bundle is applied to,
250 * an access to the repo the bundle is applied to,
251 * a ui object,
251 * a ui object,
252 * a way to retrieve a transaction to add changes to the repo,
252 * a way to retrieve a transaction to add changes to the repo,
253 * a way to record the result of processing each part,
253 * a way to record the result of processing each part,
254 * a way to construct a bundle response when applicable.
254 * a way to construct a bundle response when applicable.
255 """
255 """
256
256
257 def __init__(self, repo, transactiongetter):
257 def __init__(self, repo, transactiongetter):
258 self.repo = repo
258 self.repo = repo
259 self.ui = repo.ui
259 self.ui = repo.ui
260 self.records = unbundlerecords()
260 self.records = unbundlerecords()
261 self.gettransaction = transactiongetter
261 self.gettransaction = transactiongetter
262 self.reply = None
262 self.reply = None
263
263
264 class TransactionUnavailable(RuntimeError):
264 class TransactionUnavailable(RuntimeError):
265 pass
265 pass
266
266
267 def _notransaction():
267 def _notransaction():
268 """default method to get a transaction while processing a bundle
268 """default method to get a transaction while processing a bundle
269
269
270 Raise an exception to highlight the fact that no transaction was expected
270 Raise an exception to highlight the fact that no transaction was expected
271 to be created"""
271 to be created"""
272 raise TransactionUnavailable()
272 raise TransactionUnavailable()
273
273
274 def processbundle(repo, unbundler, transactiongetter=_notransaction):
274 def processbundle(repo, unbundler, transactiongetter=_notransaction):
275 """This function process a bundle, apply effect to/from a repo
275 """This function process a bundle, apply effect to/from a repo
276
276
277 It iterates over each part then searches for and uses the proper handling
277 It iterates over each part then searches for and uses the proper handling
278 code to process the part. Parts are processed in order.
278 code to process the part. Parts are processed in order.
279
279
280 This is very early version of this function that will be strongly reworked
280 This is very early version of this function that will be strongly reworked
281 before final usage.
281 before final usage.
282
282
283 Unknown Mandatory part will abort the process.
283 Unknown Mandatory part will abort the process.
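
    A calling sketch (``unbundler`` as returned by ``unbundle20``)::

        op = processbundle(repo, unbundler)
        results = op.records['changegroup']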
284 """
284 """
285 op = bundleoperation(repo, transactiongetter)
285 op = bundleoperation(repo, transactiongetter)
286 # todo:
286 # todo:
287 # - replace this is a init function soon.
287 # - replace this is a init function soon.
288 # - exception catching
288 # - exception catching
289 unbundler.params
289 unbundler.params
290 iterparts = unbundler.iterparts()
290 iterparts = unbundler.iterparts()
291 part = None
291 part = None
292 try:
292 try:
293 for part in iterparts:
293 for part in iterparts:
294 parttype = part.type
294 parttype = part.type
295 # part key are matched lower case
295 # part key are matched lower case
296 key = parttype.lower()
296 key = parttype.lower()
297 try:
297 try:
298 handler = parthandlermapping.get(key)
298 handler = parthandlermapping.get(key)
299 if handler is None:
299 if handler is None:
300 raise error.BundleValueError(parttype=key)
300 raise error.BundleValueError(parttype=key)
301 op.ui.debug('found a handler for part %r\n' % parttype)
301 op.ui.debug('found a handler for part %r\n' % parttype)
302 unknownparams = part.mandatorykeys - handler.params
302 unknownparams = part.mandatorykeys - handler.params
303 if unknownparams:
303 if unknownparams:
304 unknownparams = list(unknownparams)
304 unknownparams = list(unknownparams)
305 unknownparams.sort()
305 unknownparams.sort()
306 raise error.BundleValueError(parttype=key,
306 raise error.BundleValueError(parttype=key,
307 params=unknownparams)
307 params=unknownparams)
308 except error.BundleValueError, exc:
308 except error.BundleValueError, exc:
309 if key != parttype: # mandatory parts
309 if key != parttype: # mandatory parts
310 raise
310 raise
311 op.ui.debug('ignoring unsupported advisory part %s\n' % exc)
311 op.ui.debug('ignoring unsupported advisory part %s\n' % exc)
312 # consuming the part
312 # consuming the part
313 part.read()
313 part.read()
314 continue
314 continue
315
315
316
316
317 # handler is called outside the above try block so that we don't
317 # handler is called outside the above try block so that we don't
318 # risk catching KeyErrors from anything other than the
318 # risk catching KeyErrors from anything other than the
319 # parthandlermapping lookup (any KeyError raised by handler()
319 # parthandlermapping lookup (any KeyError raised by handler()
320 # itself represents a defect of a different variety).
320 # itself represents a defect of a different variety).
321 output = None
321 output = None
322 if op.reply is not None:
322 if op.reply is not None:
323 op.ui.pushbuffer(error=True)
323 op.ui.pushbuffer(error=True)
324 output = ''
324 output = ''
325 try:
325 try:
326 handler(op, part)
326 handler(op, part)
327 finally:
327 finally:
328 if output is not None:
328 if output is not None:
329 output = op.ui.popbuffer()
329 output = op.ui.popbuffer()
330 if output:
330 if output:
331 outpart = op.reply.newpart('b2x:output', data=output)
331 outpart = op.reply.newpart('b2x:output', data=output)
332 outpart.addparam('in-reply-to', str(part.id), mandatory=False)
332 outpart.addparam('in-reply-to', str(part.id), mandatory=False)
333 part.read()
333 part.read()
334 except Exception, exc:
334 except Exception, exc:
335 if part is not None:
335 if part is not None:
336 # consume the bundle content
336 # consume the bundle content
337 part.read()
337 part.read()
338 for part in iterparts:
338 for part in iterparts:
339 # consume the bundle content
339 # consume the bundle content
340 part.read()
340 part.read()
341 # Small hack to let caller code distinguish exceptions from bundle2
341 # Small hack to let caller code distinguish exceptions from bundle2
342 # processing fron the ones from bundle1 processing. This is mostly
342 # processing fron the ones from bundle1 processing. This is mostly
343 # needed to handle different return codes to unbundle according to the
343 # needed to handle different return codes to unbundle according to the
344 # type of bundle. We should probably clean up or drop this return code
344 # type of bundle. We should probably clean up or drop this return code
345 # craziness in a future version.
345 # craziness in a future version.
346 exc.duringunbundle2 = True
346 exc.duringunbundle2 = True
347 raise
347 raise
348 return op
348 return op
349
349
350 def decodecaps(blob):
350 def decodecaps(blob):
351 """decode a bundle2 caps bytes blob into a dictionnary
351 """decode a bundle2 caps bytes blob into a dictionnary
352
352
353 The blob is a list of capabilities (one per line)
353 The blob is a list of capabilities (one per line)
354 Capabilities may have values using a line of the form::
354 Capabilities may have values using a line of the form::
355
355
356 capability=value1,value2,value3
356 capability=value1,value2,value3
357
357
358 The values are always a list."""
358 The values are always a list."""
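
    For instance, the two-line blob::

        b2x:changegroup
        b2x:listkeys=phases,bookmarks

    decodes to ``{'b2x:changegroup': [], 'b2x:listkeys': ['phases',
    'bookmarks']}`` (illustrative namespaces).
    """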
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if '=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split('=', 1)
            vals = vals.split(',')
        key = urllib.unquote(key)
        vals = [urllib.unquote(v) for v in vals]
        caps[key] = vals
    return caps

def encodecaps(caps):
374 """encode a bundle2 caps dictionary into a bytes blob"""
374 """encode a bundle2 caps dictionary into a bytes blob"""
375 chunks = []
375 chunks = []
376 for ca in sorted(caps):
376 for ca in sorted(caps):
377 vals = caps[ca]
377 vals = caps[ca]
378 ca = urllib.quote(ca)
378 ca = urllib.quote(ca)
379 vals = [urllib.quote(v) for v in vals]
379 vals = [urllib.quote(v) for v in vals]
380 if vals:
380 if vals:
381 ca = "%s=%s" % (ca, ','.join(vals))
381 ca = "%s=%s" % (ca, ','.join(vals))
382 chunks.append(ca)
382 chunks.append(ca)
383 return '\n'.join(chunks)
383 return '\n'.join(chunks)
384
384
385 class bundle20(object):
385 class bundle20(object):
386 """represent an outgoing bundle2 container
386 """represent an outgoing bundle2 container
387
387
388 Use the `addparam` method to add stream level parameter. and `newpart` to
388 Use the `addparam` method to add stream level parameter. and `newpart` to
389 populate it. Then call `getchunks` to retrieve all the binary chunks of
389 populate it. Then call `getchunks` to retrieve all the binary chunks of
390 data that compose the bundle2 container."""
390 data that compose the bundle2 container."""
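
    A minimal emission sketch (assumes a ``ui`` object is at hand; the part
    type and payload are arbitrary examples)::

        bundler = bundle20(ui)
        bundler.addparam('cachehint')          # advisory stream parameter
        bundler.newpart('b2x:output', data='hello')
        binary = ''.join(bundler.getchunks())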
    """

    def __init__(self, ui, capabilities=()):
        self.ui = ui
        self._params = []
        self._parts = []
        self.capabilities = dict(capabilities)

    @property
    def nbparts(self):
        """total number of parts added to the bundler"""
        return len(self._parts)

    # methods used to define the bundle2 content
    def addparam(self, name, value=None):
        """add a stream level parameter"""
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
        self._params.append((name, value))

    def addpart(self, part):
        """add a new part to the bundle2 container

        Parts contain the actual applicative payload."""
        assert part.id is None
        part.id = len(self._parts) # very cheap counter
        self._parts.append(part)

    def newpart(self, typeid, *args, **kwargs):
        """create a new part and add it to the container

        The part is directly added to the container. For now, this means
        that any failure to properly initialize the part after calling
        ``newpart`` should result in a failure of the whole bundling process.

        You can still fall back to manually creating and adding a part if you
        need better control."""
        part = bundlepart(typeid, *args, **kwargs)
        self.addpart(part)
        return part

    # methods used to generate the bundle2 stream
    def getchunks(self):
        self.ui.debug('start emission of %s stream\n' % _magicstring)
        yield _magicstring
        param = self._paramchunk()
        self.ui.debug('bundle parameter: %s\n' % param)
        yield _pack(_fstreamparamsize, len(param))
        if param:
            yield param

        self.ui.debug('start of parts\n')
        for part in self._parts:
            self.ui.debug('bundle part: "%s"\n' % part.type)
            for chunk in part.getchunks():
                yield chunk
        self.ui.debug('end of bundle\n')
        yield '\0\0'

    def _paramchunk(self):
        """return an encoded version of all stream parameters"""
        blocks = []
        for par, value in self._params:
            par = urllib.quote(par)
            if value is not None:
                value = urllib.quote(value)
                par = '%s=%s' % (par, value)
            blocks.append(par)
        return ' '.join(blocks)

class unpackermixin(object):
463 """A mixin to extract bytes and struct data from a stream"""
463 """A mixin to extract bytes and struct data from a stream"""
464
464
465 def __init__(self, fp):
465 def __init__(self, fp):
466 self._fp = fp
466 self._fp = fp
467
467
468 def _unpack(self, format):
468 def _unpack(self, format):
469 """unpack this struct format from the stream"""
469 """unpack this struct format from the stream"""
470 data = self._readexact(struct.calcsize(format))
470 data = self._readexact(struct.calcsize(format))
471 return _unpack(format, data)
471 return _unpack(format, data)
472
472
473 def _readexact(self, size):
473 def _readexact(self, size):
474 """read exactly <size> bytes from the stream"""
474 """read exactly <size> bytes from the stream"""
475 return changegroup.readexactly(self._fp, size)
475 return changegroup.readexactly(self._fp, size)
476
476
477
477
478 class unbundle20(unpackermixin):
478 class unbundle20(unpackermixin):
479 """interpret a bundle2 stream
479 """interpret a bundle2 stream
480
480
481 This class is fed with a binary stream and yields parts through its
481 This class is fed with a binary stream and yields parts through its
482 `iterparts` methods."""
482 `iterparts` methods."""
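
    A reading sketch (assumes ``ui`` and a file object ``fp`` positioned at
    the start of a bundle2 stream)::

        unbundler = unbundle20(ui, fp)
        for part in unbundler.iterparts():
            data = part.read()    # parts must be consumed in order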
    """

    def __init__(self, ui, fp, header=None):
485 """If header is specified, we do not read it out of the stream."""
485 """If header is specified, we do not read it out of the stream."""
486 self.ui = ui
486 self.ui = ui
487 super(unbundle20, self).__init__(fp)
487 super(unbundle20, self).__init__(fp)
488 if header is None:
488 if header is None:
489 header = self._readexact(4)
489 header = self._readexact(4)
490 magic, version = header[0:2], header[2:4]
490 magic, version = header[0:2], header[2:4]
491 if magic != 'HG':
491 if magic != 'HG':
492 raise util.Abort(_('not a Mercurial bundle'))
492 raise util.Abort(_('not a Mercurial bundle'))
493 if version != '2X':
493 if version != '2X':
494 raise util.Abort(_('unknown bundle version %s') % version)
494 raise util.Abort(_('unknown bundle version %s') % version)
495 self.ui.debug('start processing of %s stream\n' % header)
495 self.ui.debug('start processing of %s stream\n' % header)
496
496
497 @util.propertycache
497 @util.propertycache
498 def params(self):
498 def params(self):
499 """dictionary of stream level parameters"""
499 """dictionary of stream level parameters"""
500 self.ui.debug('reading bundle2 stream parameters\n')
500 self.ui.debug('reading bundle2 stream parameters\n')
501 params = {}
501 params = {}
502 paramssize = self._unpack(_fstreamparamsize)[0]
502 paramssize = self._unpack(_fstreamparamsize)[0]
503 if paramssize:
503 if paramssize:
504 for p in self._readexact(paramssize).split(' '):
504 for p in self._readexact(paramssize).split(' '):
505 p = p.split('=', 1)
505 p = p.split('=', 1)
506 p = [urllib.unquote(i) for i in p]
506 p = [urllib.unquote(i) for i in p]
507 if len(p) < 2:
507 if len(p) < 2:
508 p.append(None)
508 p.append(None)
509 self._processparam(*p)
509 self._processparam(*p)
510 params[p[0]] = p[1]
510 params[p[0]] = p[1]
511 return params
511 return params
512
512
513 def _processparam(self, name, value):
513 def _processparam(self, name, value):
514 """process a parameter, applying its effect if needed
514 """process a parameter, applying its effect if needed
515
515
516 Parameter starting with a lower case letter are advisory and will be
516 Parameter starting with a lower case letter are advisory and will be
517 ignored when unknown. Those starting with an upper case letter are
517 ignored when unknown. Those starting with an upper case letter are
518 mandatory and will this function will raise a KeyError when unknown.
518 mandatory and will this function will raise a KeyError when unknown.
519
519
520 Note: no option are currently supported. Any input will be either
520 Note: no option are currently supported. Any input will be either
521 ignored or failing.
521 ignored or failing.
522 """
522 """
523 if not name:
523 if not name:
524 raise ValueError('empty parameter name')
524 raise ValueError('empty parameter name')
525 if name[0] not in string.letters:
525 if name[0] not in string.letters:
526 raise ValueError('non letter first character: %r' % name)
526 raise ValueError('non letter first character: %r' % name)
527 # Some logic will be later added here to try to process the option for
527 # Some logic will be later added here to try to process the option for
528 # a dict of known parameter.
528 # a dict of known parameter.
529 if name[0].islower():
529 if name[0].islower():
530 self.ui.debug("ignoring unknown parameter %r\n" % name)
530 self.ui.debug("ignoring unknown parameter %r\n" % name)
531 else:
531 else:
532 raise error.BundleValueError(params=(name,))
532 raise error.BundleValueError(params=(name,))
533
533
534
534
535 def iterparts(self):
535 def iterparts(self):
536 """yield all parts contained in the stream"""
536 """yield all parts contained in the stream"""
537 # make sure param have been loaded
537 # make sure param have been loaded
538 self.params
538 self.params
539 self.ui.debug('start extraction of bundle2 parts\n')
539 self.ui.debug('start extraction of bundle2 parts\n')
540 headerblock = self._readpartheader()
540 headerblock = self._readpartheader()
541 while headerblock is not None:
541 while headerblock is not None:
542 part = unbundlepart(self.ui, headerblock, self._fp)
542 part = unbundlepart(self.ui, headerblock, self._fp)
543 yield part
543 yield part
544 headerblock = self._readpartheader()
544 headerblock = self._readpartheader()
545 self.ui.debug('end of bundle2 stream\n')
545 self.ui.debug('end of bundle2 stream\n')
546
546
547 def _readpartheader(self):
547 def _readpartheader(self):
548 """reads a part header size and return the bytes blob
548 """reads a part header size and return the bytes blob
549
549
550 returns None if empty"""
550 returns None if empty"""
551 headersize = self._unpack(_fpartheadersize)[0]
551 headersize = self._unpack(_fpartheadersize)[0]
552 self.ui.debug('part header size: %i\n' % headersize)
552 self.ui.debug('part header size: %i\n' % headersize)
553 if headersize:
553 if headersize:
554 return self._readexact(headersize)
554 return self._readexact(headersize)
555 return None
555 return None
556
556
557
557
558 class bundlepart(object):
558 class bundlepart(object):
559 """A bundle2 part contains application level payload
559 """A bundle2 part contains application level payload
560
560
561 The part `type` is used to route the part to the application level
561 The part `type` is used to route the part to the application level
562 handler.
562 handler.
563
563
564 The part payload is contained in ``part.data``. It could be raw bytes or a
564 The part payload is contained in ``part.data``. It could be raw bytes or a
565 generator of byte chunks.
565 generator of byte chunks.
566
566
567 You can add parameters to the part using the ``addparam`` method.
567 You can add parameters to the part using the ``addparam`` method.
568 Parameters can be either mandatory (default) or advisory. Remote side
568 Parameters can be either mandatory (default) or advisory. Remote side
569 should be able to safely ignore the advisory ones.
569 should be able to safely ignore the advisory ones.
570
570
571 Both data and parameters cannot be modified after the generation has begun.
571 Both data and parameters cannot be modified after the generation has begun.
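
    A construction sketch (part type and values are arbitrary examples)::

        part = bundlepart('b2x:output', data='some bytes')
        part.addparam('in-reply-to', '0', mandatory=False)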
572 """
572 """
573
573
574 def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
574 def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
575 data=''):
575 data=''):
        self.id = None
        self.type = parttype
        self._data = data
        self._mandatoryparams = list(mandatoryparams)
        self._advisoryparams = list(advisoryparams)
        # checking for duplicated entries
        self._seenparams = set()
        for pname, __ in self._mandatoryparams + self._advisoryparams:
            if pname in self._seenparams:
                raise RuntimeError('duplicated params: %s' % pname)
            self._seenparams.add(pname)
        # status of the part's generation:
        # - None: not started,
        # - False: currently generated,
        # - True: generation done.
        self._generated = None

    # methods used to define the part content
    def __setdata(self, data):
        if self._generated is not None:
            raise error.ReadOnlyPartError('part is being generated')
        self._data = data
    def __getdata(self):
        return self._data
    data = property(__getdata, __setdata)

    @property
    def mandatoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._mandatoryparams)

    @property
    def advisoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._advisoryparams)

    def addparam(self, name, value='', mandatory=True):
        if self._generated is not None:
            raise error.ReadOnlyPartError('part is being generated')
        if name in self._seenparams:
            raise ValueError('duplicated params: %s' % name)
        self._seenparams.add(name)
        params = self._advisoryparams
        if mandatory:
            params = self._mandatoryparams
        params.append((name, value))

    # methods used to generate the bundle2 stream
    def getchunks(self):
        if self._generated is not None:
            raise RuntimeError('part can only be consumed once')
        self._generated = False
        #### header
        ## parttype
        header = [_pack(_fparttypesize, len(self.type)),
                  self.type, _pack(_fpartid, self.id),
                 ]
        ## parameters
        # count
        manpar = self.mandatoryparams
        advpar = self.advisoryparams
        header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
        # size
        parsizes = []
        for key, value in manpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        for key, value in advpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        paramsizes = _pack(_makefpartparamsizes(len(parsizes) / 2), *parsizes)
        header.append(paramsizes)
        # key, value
        for key, value in manpar:
            header.append(key)
            header.append(value)
        for key, value in advpar:
            header.append(key)
            header.append(value)
        ## finalize header
        headerchunk = ''.join(header)
        yield _pack(_fpartheadersize, len(headerchunk))
        yield headerchunk
        ## payload
        for chunk in self._payloadchunks():
            yield _pack(_fpayloadsize, len(chunk))
            yield chunk
        # end of payload
        yield _pack(_fpayloadsize, 0)
        self._generated = True

    def _payloadchunks(self):
        """yield chunks of the part payload

        Exists to handle the different methods to provide data to a part."""
        # we only support fixed size data now.
        # This will be improved in the future.
        if util.safehasattr(self.data, 'next'):
            buff = util.chunkbuffer(self.data)
            chunk = buff.read(preferedchunksize)
            while chunk:
                yield chunk
                chunk = buff.read(preferedchunksize)
        elif len(self.data):
            yield self.data

class unbundlepart(unpackermixin):
683 """a bundle part read from a bundle"""
683 """a bundle part read from a bundle"""
684
684
685 def __init__(self, ui, header, fp):
685 def __init__(self, ui, header, fp):
686 super(unbundlepart, self).__init__(fp)
686 super(unbundlepart, self).__init__(fp)
687 self.ui = ui
687 self.ui = ui
688 # unbundle state attr
688 # unbundle state attr
689 self._headerdata = header
689 self._headerdata = header
690 self._headeroffset = 0
690 self._headeroffset = 0
691 self._initialized = False
691 self._initialized = False
692 self.consumed = False
692 self.consumed = False
693 # part data
693 # part data
694 self.id = None
694 self.id = None
695 self.type = None
695 self.type = None
696 self.mandatoryparams = None
696 self.mandatoryparams = None
697 self.advisoryparams = None
697 self.advisoryparams = None
698 self.params = None
698 self.params = None
699 self.mandatorykeys = ()
699 self.mandatorykeys = ()
700 self._payloadstream = None
700 self._payloadstream = None
701 self._readheader()
701 self._readheader()
702
702
703 def _fromheader(self, size):
703 def _fromheader(self, size):
704 """return the next <size> byte from the header"""
704 """return the next <size> byte from the header"""
705 offset = self._headeroffset
705 offset = self._headeroffset
706 data = self._headerdata[offset:(offset + size)]
706 data = self._headerdata[offset:(offset + size)]
707 self._headeroffset = offset + size
707 self._headeroffset = offset + size
708 return data
708 return data
709
709
710 def _unpackheader(self, format):
710 def _unpackheader(self, format):
711 """read given format from header
711 """read given format from header
712
712
713 This automatically compute the size of the format to read."""
713 This automatically compute the size of the format to read."""
714 data = self._fromheader(struct.calcsize(format))
714 data = self._fromheader(struct.calcsize(format))
715 return _unpack(format, data)
715 return _unpack(format, data)
716
716
717 def _initparams(self, mandatoryparams, advisoryparams):
717 def _initparams(self, mandatoryparams, advisoryparams):
718 """internal function to setup all logic related parameters"""
718 """internal function to setup all logic related parameters"""
719 # make it read only to prevent people touching it by mistake.
719 # make it read only to prevent people touching it by mistake.
720 self.mandatoryparams = tuple(mandatoryparams)
720 self.mandatoryparams = tuple(mandatoryparams)
721 self.advisoryparams = tuple(advisoryparams)
721 self.advisoryparams = tuple(advisoryparams)
722 # user friendly UI
722 # user friendly UI
723 self.params = dict(self.mandatoryparams)
723 self.params = dict(self.mandatoryparams)
724 self.params.update(dict(self.advisoryparams))
724 self.params.update(dict(self.advisoryparams))
725 self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)
725 self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)
726
726
727 def _readheader(self):
727 def _readheader(self):
728 """read the header and setup the object"""
728 """read the header and setup the object"""
729 typesize = self._unpackheader(_fparttypesize)[0]
729 typesize = self._unpackheader(_fparttypesize)[0]
730 self.type = self._fromheader(typesize)
730 self.type = self._fromheader(typesize)
731 self.ui.debug('part type: "%s"\n' % self.type)
731 self.ui.debug('part type: "%s"\n' % self.type)
732 self.id = self._unpackheader(_fpartid)[0]
732 self.id = self._unpackheader(_fpartid)[0]
733 self.ui.debug('part id: "%s"\n' % self.id)
733 self.ui.debug('part id: "%s"\n' % self.id)
734 ## reading parameters
734 ## reading parameters
735 # param count
735 # param count
736 mancount, advcount = self._unpackheader(_fpartparamcount)
736 mancount, advcount = self._unpackheader(_fpartparamcount)
737 self.ui.debug('part parameters: %i\n' % (mancount + advcount))
737 self.ui.debug('part parameters: %i\n' % (mancount + advcount))
738 # param size
738 # param size
739 fparamsizes = _makefpartparamsizes(mancount + advcount)
739 fparamsizes = _makefpartparamsizes(mancount + advcount)
740 paramsizes = self._unpackheader(fparamsizes)
740 paramsizes = self._unpackheader(fparamsizes)
741 # make it a list of couple again
741 # make it a list of couple again
742 paramsizes = zip(paramsizes[::2], paramsizes[1::2])
742 paramsizes = zip(paramsizes[::2], paramsizes[1::2])
743 # split mandatory from advisory
743 # split mandatory from advisory
744 mansizes = paramsizes[:mancount]
744 mansizes = paramsizes[:mancount]
745 advsizes = paramsizes[mancount:]
745 advsizes = paramsizes[mancount:]
746 # retrive param value
746 # retrive param value
747 manparams = []
747 manparams = []
748 for key, value in mansizes:
748 for key, value in mansizes:
749 manparams.append((self._fromheader(key), self._fromheader(value)))
749 manparams.append((self._fromheader(key), self._fromheader(value)))
750 advparams = []
750 advparams = []
751 for key, value in advsizes:
751 for key, value in advsizes:
752 advparams.append((self._fromheader(key), self._fromheader(value)))
752 advparams.append((self._fromheader(key), self._fromheader(value)))
753 self._initparams(manparams, advparams)
753 self._initparams(manparams, advparams)
754 ## part payload
754 ## part payload
755 def payloadchunks():
755 def payloadchunks():
756 payloadsize = self._unpack(_fpayloadsize)[0]
756 payloadsize = self._unpack(_fpayloadsize)[0]
757 self.ui.debug('payload chunk size: %i\n' % payloadsize)
757 self.ui.debug('payload chunk size: %i\n' % payloadsize)
758 while payloadsize:
758 while payloadsize:
759 yield self._readexact(payloadsize)
759 yield self._readexact(payloadsize)
760 payloadsize = self._unpack(_fpayloadsize)[0]
760 payloadsize = self._unpack(_fpayloadsize)[0]
761 self.ui.debug('payload chunk size: %i\n' % payloadsize)
761 self.ui.debug('payload chunk size: %i\n' % payloadsize)
762 self._payloadstream = util.chunkbuffer(payloadchunks())
762 self._payloadstream = util.chunkbuffer(payloadchunks())
763 # we read the data, tell it
763 # we read the data, tell it
764 self._initialized = True
764 self._initialized = True
765
765
766 def read(self, size=None):
766 def read(self, size=None):
767 """read payload data"""
767 """read payload data"""
768 if not self._initialized:
768 if not self._initialized:
769 self._readheader()
769 self._readheader()
770 if size is None:
770 if size is None:
771 data = self._payloadstream.read()
771 data = self._payloadstream.read()
772 else:
772 else:
773 data = self._payloadstream.read(size)
773 data = self._payloadstream.read(size)
774 if size is None or len(data) < size:
774 if size is None or len(data) < size:
775 self.consumed = True
775 self.consumed = True
776 return data
776 return data
777
777
778 capabilities = {'HG2X': (),
779 'b2x:listkeys': (),
780 'b2x:pushkey': (),
781 'b2x:changegroup': (),
782 }
783
778 def bundle2caps(remote):
784 def bundle2caps(remote):
779 """return the bundlecapabilities of a peer as dict"""
785 """return the bundlecapabilities of a peer as dict"""
780 raw = remote.capable('bundle2-exp')
786 raw = remote.capable('bundle2-exp')
781 if not raw and raw != '':
787 if not raw and raw != '':
782 return {}
788 return {}
783 capsblob = urllib.unquote(remote.capable('bundle2-exp'))
789 capsblob = urllib.unquote(remote.capable('bundle2-exp'))
784 return decodecaps(capsblob)
790 return decodecaps(capsblob)
785
791
786 @parthandler('b2x:changegroup')
792 @parthandler('b2x:changegroup')
def handlechangegroup(op, inpart):
    """apply a changegroup part on the repo

    This is a very early implementation that will be massively reworked
    before being inflicted on any end-user.
    """
    # Make sure we trigger a transaction creation
    #
    # The addchangegroup function will get a transaction object by itself, but
    # we need to make sure we trigger the creation of a transaction object used
    # for the whole processing scope.
    op.gettransaction()
    cg = changegroup.unbundle10(inpart, 'UN')
    ret = changegroup.addchangegroup(op.repo, cg, 'bundle2', 'bundle2')
    op.records.add('changegroup', {'return': ret})
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart('b2x:reply:changegroup')
        part.addparam('in-reply-to', str(inpart.id), mandatory=False)
        part.addparam('return', '%i' % ret, mandatory=False)
    assert not inpart.read()

@parthandler('b2x:reply:changegroup', ('return', 'in-reply-to'))
def handlechangegroup(op, inpart):
    ret = int(inpart.params['return'])
    replyto = int(inpart.params['in-reply-to'])
    op.records.add('changegroup', {'return': ret}, replyto)

@parthandler('b2x:check:heads')
def handlechangegroup(op, inpart):
818 """check that head of the repo did not change
824 """check that head of the repo did not change
819
825
820 This is used to detect a push race when using unbundle.
826 This is used to detect a push race when using unbundle.
821 This replaces the "heads" argument of unbundle."""
827 This replaces the "heads" argument of unbundle."""
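
    The emitting side is expected to send the remote heads as raw
    concatenated 20-byte nodes, e.g. (sketch)::

        bundler.newpart('b2x:check:heads', data=''.join(repo.heads()))
    """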
822 h = inpart.read(20)
828 h = inpart.read(20)
823 heads = []
829 heads = []
824 while len(h) == 20:
830 while len(h) == 20:
825 heads.append(h)
831 heads.append(h)
826 h = inpart.read(20)
832 h = inpart.read(20)
827 assert not h
833 assert not h
828 if heads != op.repo.heads():
834 if heads != op.repo.heads():
829 raise error.PushRaced('repository changed while pushing - '
835 raise error.PushRaced('repository changed while pushing - '
830 'please try again')
836 'please try again')
831
837
832 @parthandler('b2x:output')
838 @parthandler('b2x:output')
833 def handleoutput(op, inpart):
839 def handleoutput(op, inpart):
834 """forward output captured on the server to the client"""
840 """forward output captured on the server to the client"""
835 for line in inpart.read().splitlines():
841 for line in inpart.read().splitlines():
836 op.ui.write(('remote: %s\n' % line))
842 op.ui.write(('remote: %s\n' % line))
837
843
@parthandler('b2x:replycaps')
def handlereplycaps(op, inpart):
    """Notify that a reply bundle should be created

    The payload contains the capabilities information for the reply"""
    caps = decodecaps(inpart.read())
    if op.reply is None:
        op.reply = bundle20(op.ui, caps)

@parthandler('b2x:error:abort', ('message', 'hint'))
def handlereplycaps(op, inpart):
    """Used to transmit abort error over the wire"""
    raise util.Abort(inpart.params['message'], hint=inpart.params.get('hint'))

@parthandler('b2x:error:unsupportedcontent', ('parttype', 'params'))
def handlereplycaps(op, inpart):
    """Used to transmit unknown content error over the wire"""
    kwargs = {}
    parttype = inpart.params.get('parttype')
    if parttype is not None:
        kwargs['parttype'] = parttype
    params = inpart.params.get('params')
    if params is not None:
        kwargs['params'] = params.split('\0')

    raise error.BundleValueError(**kwargs)

@parthandler('b2x:error:pushraced', ('message',))
def handlereplycaps(op, inpart):
    """Used to transmit push race error over the wire"""
    raise error.ResponseError(_('push failed:'), inpart.params['message'])

@parthandler('b2x:listkeys', ('namespace',))
def handlelistkeys(op, inpart):
    """retrieve pushkey namespace content stored in a bundle2"""
    namespace = inpart.params['namespace']
    r = pushkey.decodekeys(inpart.read())
    op.records.add('listkeys', (namespace, r))

@parthandler('b2x:pushkey', ('namespace', 'key', 'old', 'new'))
def handlepushkey(op, inpart):
    """process a pushkey request"""
    dec = pushkey.decode
    namespace = dec(inpart.params['namespace'])
    key = dec(inpart.params['key'])
    old = dec(inpart.params['old'])
    new = dec(inpart.params['new'])
    ret = op.repo.pushkey(namespace, key, old, new)
    record = {'namespace': namespace,
              'key': key,
              'old': old,
              'new': new}
    op.records.add('pushkey', record)
    if op.reply is not None:
        rpart = op.reply.newpart('b2x:reply:pushkey')
        rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
        rpart.addparam('return', '%i' % ret, mandatory=False)

@parthandler('b2x:reply:pushkey', ('return', 'in-reply-to'))
def handlepushkeyreply(op, inpart):
    """retrieve the result of a pushkey request"""
    ret = int(inpart.params['return'])
    partid = int(inpart.params['in-reply-to'])
    op.records.add('pushkey', {'return': ret}, partid)

@parthandler('b2x:obsmarkers')
def handleobsmarker(op, inpart):
    """add a stream of obsmarkers to the repo"""
    tr = op.gettransaction()
    new = op.repo.obsstore.mergemarkers(tr, inpart.read())
    if new:
        op.repo.ui.status(_('%i new obsolescence markers\n') % new)
    op.records.add('obsmarkers', {'new': new})
    if op.reply is not None:
        rpart = op.reply.newpart('b2x:reply:obsmarkers')
        rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
        rpart.addparam('new', '%i' % new, mandatory=False)


@parthandler('b2x:reply:obsmarkers', ('new', 'in-reply-to'))
def handlepushkeyreply(op, inpart):
    """retrieve the result of an obsmarkers push"""
    ret = int(inpart.params['new'])
    partid = int(inpart.params['in-reply-to'])
    op.records.add('obsmarkers', {'new': ret}, partid)
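
# A minimal sketch of the extension point used throughout this section:
# @parthandler registers a function for a part type, the tuple names the
# parameters the handler understands, and replies are emitted through
# op.reply. The part name 'b2x:myext:ping' is hypothetical.
@parthandler('b2x:myext:ping', ('text',))
def handleping(op, inpart):
    """hypothetical handler echoing its payload back to the client"""
    text = inpart.params['text']
    if op.reply is not None:
        op.reply.newpart('b2x:output', data='pong: %s' % text)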
@@ -1,1031 +1,1031 b''
# exchange.py - utility to exchange data between repos.
#
# Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from i18n import _
from node import hex, nullid
import errno, urllib
import util, scmutil, changegroup, base85, error
import discovery, phases, obsolete, bookmarks, bundle2, pushkey

def readbundle(ui, fh, fname, vfs=None):
    header = changegroup.readexactly(fh, 4)

    alg = None
    if not fname:
        fname = "stream"
    if not header.startswith('HG') and header.startswith('\0'):
        fh = changegroup.headerlessfixup(fh, header)
        header = "HG10"
        alg = 'UN'
    elif vfs:
        fname = vfs.join(fname)

    magic, version = header[0:2], header[2:4]

    if magic != 'HG':
        raise util.Abort(_('%s: not a Mercurial bundle') % fname)
    if version == '10':
        if alg is None:
            alg = changegroup.readexactly(fh, 2)
        return changegroup.unbundle10(fh, alg)
    elif version == '2X':
        return bundle2.unbundle20(ui, fh, header=magic + version)
    else:
        raise util.Abort(_('%s: unknown bundle version %s') % (fname, version))

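# A minimal usage sketch for readbundle: it sniffs the 4-byte header,
# 'HG10' meaning the changegroup format (followed by a 2-byte
# compression code) and 'HG2X' the experimental bundle2 format. The
# path name is hypothetical.
def readbundlefile(ui, path):
    fh = open(path, 'rb')
    return readbundle(ui, fh, path)
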
class pushoperation(object):
    """An object that represents a single push operation.

    Its purpose is to carry push-related state and very common operations.

    A new one should be created at the beginning of each push and discarded
    afterward.
    """

    def __init__(self, repo, remote, force=False, revs=None, newbranch=False):
        # repo we push from
        self.repo = repo
        self.ui = repo.ui
        # repo we push to
        self.remote = remote
        # force option provided
        self.force = force
        # revs to be pushed (None is "all")
        self.revs = revs
        # allow push of new branch
        self.newbranch = newbranch
        # did a local lock get acquired?
        self.locallocked = None
        # steps already performed
        # (used to check what steps have been already performed through bundle2)
        self.stepsdone = set()
        # Integer version of the push result
        # - None means nothing to push
        # - 0 means HTTP error
        # - 1 means we pushed and remote head count is unchanged *or*
        #   we have outgoing changesets but refused to push
        # - other values as described by addchangegroup()
        self.ret = None
        # discovery.outgoing object (contains common and outgoing data)
        self.outgoing = None
        # all remote heads before the push
        self.remoteheads = None
        # testable as a boolean indicating if any nodes are missing locally.
        self.incoming = None
        # phase changes that must be pushed along with the changesets
        self.outdatedphases = None
        # phase changes that must be pushed if the changeset push fails
        self.fallbackoutdatedphases = None
        # outgoing obsmarkers
        self.outobsmarkers = set()
        # outgoing bookmarks
        self.outbookmarks = []

    @util.propertycache
    def futureheads(self):
        """future remote heads if the changeset push succeeds"""
        return self.outgoing.missingheads

    @util.propertycache
    def fallbackheads(self):
        """future remote heads if the changeset push fails"""
        if self.revs is None:
            # no targets to push; all common heads are relevant
            return self.outgoing.commonheads
        unfi = self.repo.unfiltered()
        # I want cheads = heads(::missingheads and ::commonheads)
        # (missingheads is revs with secret changesets filtered out)
        #
        # This can be expressed as:
        #     cheads = ( (missingheads and ::commonheads)
        #              + (commonheads and ::missingheads))
        #
        # while trying to push we already computed the following:
        #     common = (::commonheads)
        #     missing = ((commonheads::missingheads) - commonheads)
        #
        # We can pick:
        # * missingheads part of common (::commonheads)
        common = set(self.outgoing.common)
        nm = self.repo.changelog.nodemap
        cheads = [node for node in self.revs if nm[node] in common]
        # and
        # * commonheads parents on missing
        revset = unfi.set('%ln and parents(roots(%ln))',
                          self.outgoing.commonheads,
                          self.outgoing.missing)
        cheads.extend(c.node() for c in revset)
        return cheads

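# A toy, pure-Python illustration (not Mercurial API) of the identity
# used in fallbackheads above: on a small DAG, the fallback common
# heads are heads(::missingheads and ::commonheads).
toyparents = {'a': [], 'b': ['a'], 'c': ['b'], 'd': ['b']}

def toyancestors(nodes):
    seen = set()
    stack = list(nodes)
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(toyparents[n])
    return seen  # '::nodes', inclusive

both = toyancestors(['c', 'd']) & toyancestors(['b'])  # missing vs common
toyheads = [n for n in both
            if not any(n in toyparents[m] for m in both)]
assert toyheads == ['b']  # if the push fails, 'b' stays the common head
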
    @property
    def commonheads(self):
        """set of all common heads after changeset bundle push"""
        if self.ret:
            return self.futureheads
        else:
            return self.fallbackheads

def push(repo, remote, force=False, revs=None, newbranch=False):
    '''Push outgoing changesets (limited by revs) from a local
    repository to remote. Return an integer:
      - None means nothing to push
      - 0 means HTTP error
      - 1 means we pushed and remote head count is unchanged *or*
        we have outgoing changesets but refused to push
      - other values as described by addchangegroup()
    '''
    pushop = pushoperation(repo, remote, force, revs, newbranch)
    if pushop.remote.local():
        missing = (set(pushop.repo.requirements)
                   - pushop.remote.local().supported)
        if missing:
            msg = _("required features are not"
                    " supported in the destination:"
                    " %s") % (', '.join(sorted(missing)))
            raise util.Abort(msg)

    # there are two ways to push to remote repo:
    #
    # addchangegroup assumes local user can lock remote
    # repo (local filesystem, old ssh servers).
    #
    # unbundle assumes local user cannot lock remote repo (new ssh
    # servers, http servers).

    if not pushop.remote.canpush():
        raise util.Abort(_("destination does not support push"))
    # get local lock as we might write phase data
    locallock = None
    try:
        locallock = pushop.repo.lock()
        pushop.locallocked = True
    except IOError, err:
        pushop.locallocked = False
        if err.errno != errno.EACCES:
            raise
        # source repo cannot be locked.
        # We do not abort the push, but just disable the local phase
        # synchronisation.
        msg = 'cannot lock source repository: %s\n' % err
        pushop.ui.debug(msg)
    try:
        pushop.repo.checkpush(pushop)
        lock = None
        unbundle = pushop.remote.capable('unbundle')
        if not unbundle:
            lock = pushop.remote.lock()
        try:
            _pushdiscovery(pushop)
            if (pushop.repo.ui.configbool('experimental', 'bundle2-exp',
                                          False)
                and pushop.remote.capable('bundle2-exp')):
                _pushbundle2(pushop)
            _pushchangeset(pushop)
            _pushsyncphase(pushop)
            _pushobsolete(pushop)
            _pushbookmark(pushop)
        finally:
            if lock is not None:
                lock.release()
    finally:
        if locallock is not None:
            locallock.release()

    return pushop.ret

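# A minimal usage sketch, assuming 'repo' is a local repository object;
# hg.peer is the usual way to obtain a remote (the import and URL are
# illustrative, not part of this module):
import hg

def pushall(repo, url):
    remote = hg.peer(repo.ui, {}, url)
    ret = push(repo, remote, force=False, revs=None, newbranch=False)
    if ret == 0:
        repo.ui.warn('push failed (HTTP error)\n')
    return ret
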
# list of steps to perform discovery before push
pushdiscoveryorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
pushdiscoverymapping = {}

def pushdiscovery(stepname):
    """decorator for functions performing discovery before push

    The function is added to the step -> function mapping and appended to
    the list of steps. Beware that decorated functions will be added in
    order (this may matter).

    You can only use this decorator for a new step; if you want to wrap a
    step from an extension, change the pushdiscoverymapping dictionary
    directly."""
    def dec(func):
        assert stepname not in pushdiscoverymapping
        pushdiscoverymapping[stepname] = func
        pushdiscoveryorder.append(stepname)
        return func
    return dec

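# A minimal sketch of both extension points described in the docstring
# above; the step name 'myext-audit' and its message are hypothetical.
@pushdiscovery('myext-audit')
def _pushdiscoveryaudit(pushop):
    pushop.ui.debug('push audited for %s\n' % pushop.remote.url())

# Wrapping an existing step instead goes through the mapping directly:
_origchangeset = pushdiscoverymapping['changeset']
def _wrappedchangeset(pushop):
    pushop.ui.debug('before changeset discovery\n')
    return _origchangeset(pushop)
pushdiscoverymapping['changeset'] = _wrappedchangeset
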
def _pushdiscovery(pushop):
    """Run all discovery steps"""
    for stepname in pushdiscoveryorder:
        step = pushdiscoverymapping[stepname]
        step(pushop)

@pushdiscovery('changeset')
def _pushdiscoverychangeset(pushop):
    """discover the changesets that need to be pushed"""
    unfi = pushop.repo.unfiltered()
    fci = discovery.findcommonincoming
    commoninc = fci(unfi, pushop.remote, force=pushop.force)
    common, inc, remoteheads = commoninc
    fco = discovery.findcommonoutgoing
    outgoing = fco(unfi, pushop.remote, onlyheads=pushop.revs,
                   commoninc=commoninc, force=pushop.force)
    pushop.outgoing = outgoing
    pushop.remoteheads = remoteheads
    pushop.incoming = inc

@pushdiscovery('phase')
def _pushdiscoveryphase(pushop):
    """discover the phases that need to be pushed

    (computed for both the success and failure cases of the changesets
    push)"""
    outgoing = pushop.outgoing
    unfi = pushop.repo.unfiltered()
    remotephases = pushop.remote.listkeys('phases')
    publishing = remotephases.get('publishing', False)
    ana = phases.analyzeremotephases(pushop.repo,
                                     pushop.fallbackheads,
                                     remotephases)
    pheads, droots = ana
    extracond = ''
    if not publishing:
        extracond = ' and public()'
    revset = 'heads((%%ln::%%ln) %s)' % extracond
    # Get the list of all revs that are draft on the remote but public here.
    # XXX Beware that the revset breaks if droots is not strictly made of
    # XXX roots; we may want to ensure it is, but that is costly.
    fallback = list(unfi.set(revset, droots, pushop.fallbackheads))
    if not outgoing.missing:
        future = fallback
    else:
        # adds the changesets we are going to push as draft
        #
        # should not be necessary for publishing servers, but because of an
        # issue fixed in xxxxx we have to do it anyway.
        fdroots = list(unfi.set('roots(%ln + %ln::)',
                                outgoing.missing, droots))
        fdroots = [f.node() for f in fdroots]
        future = list(unfi.set(revset, fdroots, pushop.futureheads))
    pushop.outdatedphases = future
    pushop.fallbackoutdatedphases = fallback

@pushdiscovery('obsmarker')
def _pushdiscoveryobsmarkers(pushop):
    if (obsolete._enabled
        and pushop.repo.obsstore
        and 'obsolete' in pushop.remote.listkeys('namespaces')):
        pushop.outobsmarkers = pushop.repo.obsstore

@pushdiscovery('bookmarks')
def _pushdiscoverybookmarks(pushop):
    ui = pushop.ui
    repo = pushop.repo.unfiltered()
    remote = pushop.remote
    ui.debug("checking for updated bookmarks\n")
    ancestors = ()
    if pushop.revs:
        revnums = map(repo.changelog.rev, pushop.revs)
        ancestors = repo.changelog.ancestors(revnums, inclusive=True)
    remotebookmark = remote.listkeys('bookmarks')

    comp = bookmarks.compare(repo, repo._bookmarks, remotebookmark, srchex=hex)
    addsrc, adddst, advsrc, advdst, diverge, differ, invalid = comp
    for b, scid, dcid in advsrc:
        if not ancestors or repo[scid].rev() in ancestors:
            pushop.outbookmarks.append((b, dcid, scid))

def _pushcheckoutgoing(pushop):
    outgoing = pushop.outgoing
    unfi = pushop.repo.unfiltered()
    if not outgoing.missing:
        # nothing to push
        scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
        return False
    # something to push
    if not pushop.force:
        # if repo.obsstore is empty --> no obsolete markers,
        # so we can skip the iteration
        if unfi.obsstore:
            # these messages are defined up front for 80-char-limit reasons
            mso = _("push includes obsolete changeset: %s!")
            mst = "push includes %s changeset: %s!"
            # plain versions for the i18n tool to detect them
            _("push includes unstable changeset: %s!")
            _("push includes bumped changeset: %s!")
            _("push includes divergent changeset: %s!")
            # If there is at least one obsolete or unstable
            # changeset in missing, at least one of the
            # missing heads will be obsolete or unstable.
            # So checking heads only is ok.
            for node in outgoing.missingheads:
                ctx = unfi[node]
                if ctx.obsolete():
                    raise util.Abort(mso % ctx)
                elif ctx.troubled():
                    raise util.Abort(_(mst)
                                     % (ctx.troubles()[0],
                                        ctx))
        newbm = pushop.ui.configlist('bookmarks', 'pushing')
        discovery.checkheads(unfi, pushop.remote, outgoing,
                             pushop.remoteheads,
                             pushop.newbranch,
                             bool(pushop.incoming),
                             newbm)
    return True

# List of names of steps to perform for an outgoing bundle2, order matters.
b2partsgenorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
b2partsgenmapping = {}

def b2partsgenerator(stepname):
    """decorator for functions generating bundle2 parts

    The function is added to the step -> function mapping and appended to
    the list of steps. Beware that decorated functions will be added in
    order (this may matter).

    You can only use this decorator for new steps; if you want to wrap a
    step from an extension, change the b2partsgenmapping dictionary
    directly."""
    def dec(func):
        assert stepname not in b2partsgenmapping
        b2partsgenmapping[stepname] = func
        b2partsgenorder.append(stepname)
        return func
    return dec

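# A minimal sketch of wrapping an existing parts generator from an
# extension, as the docstring above suggests; the debug message is
# illustrative. Note the wrapper must pass through the return value,
# since callers treat a callable result as a reply handler:
_origphasegen = b2partsgenmapping['phase']
def _loggedphasegen(pushop, bundler):
    pushop.ui.debug('generating phase parts\n')
    return _origphasegen(pushop, bundler)
b2partsgenmapping['phase'] = _loggedphasegen
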
@b2partsgenerator('changeset')
def _pushb2ctx(pushop, bundler):
    """handle changegroup push through bundle2

    The addchangegroup result is stored in the ``pushop.ret`` attribute.
    """
    if 'changesets' in pushop.stepsdone:
        return
    pushop.stepsdone.add('changesets')
    # Send known heads to the server for race detection.
    if not _pushcheckoutgoing(pushop):
        return
    pushop.repo.prepushoutgoinghooks(pushop.repo,
                                     pushop.remote,
                                     pushop.outgoing)
    if not pushop.force:
        bundler.newpart('B2X:CHECK:HEADS', data=iter(pushop.remoteheads))
    cg = changegroup.getlocalbundle(pushop.repo, 'push', pushop.outgoing)
    cgpart = bundler.newpart('B2X:CHANGEGROUP', data=cg.getchunks())
    def handlereply(op):
        """extract the addchangegroup return from the server reply"""
        cgreplies = op.records.getreplies(cgpart.id)
        assert len(cgreplies['changegroup']) == 1
        pushop.ret = cgreplies['changegroup'][0]['return']
    return handlereply

@b2partsgenerator('phase')
def _pushb2phases(pushop, bundler):
    """handle phase push through bundle2"""
    if 'phases' in pushop.stepsdone:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    if 'b2x:pushkey' not in b2caps:
        return
    pushop.stepsdone.add('phases')
    part2node = []
    enc = pushkey.encode
    for newremotehead in pushop.outdatedphases:
        part = bundler.newpart('b2x:pushkey')
        part.addparam('namespace', enc('phases'))
        part.addparam('key', enc(newremotehead.hex()))
        part.addparam('old', enc(str(phases.draft)))
        part.addparam('new', enc(str(phases.public)))
        part2node.append((part.id, newremotehead))
    def handlereply(op):
        for partid, node in part2node:
            partrep = op.records.getreplies(partid)
            results = partrep['pushkey']
            assert len(results) <= 1
            msg = None
            if not results:
                msg = _('server ignored update of %s to public!\n') % node
            elif not int(results[0]['return']):
                msg = _('updating %s to public failed!\n') % node
            if msg is not None:
                pushop.ui.warn(msg)
    return handlereply

@b2partsgenerator('bookmarks')
def _pushb2bookmarks(pushop, bundler):
    """handle bookmark push through bundle2"""
    if 'bookmarks' in pushop.stepsdone:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    if 'b2x:pushkey' not in b2caps:
        return
    pushop.stepsdone.add('bookmarks')
    part2book = []
    enc = pushkey.encode
    for book, old, new in pushop.outbookmarks:
        part = bundler.newpart('b2x:pushkey')
        part.addparam('namespace', enc('bookmarks'))
        part.addparam('key', enc(book))
        part.addparam('old', enc(old))
        part.addparam('new', enc(new))
        part2book.append((part.id, book))
    def handlereply(op):
        for partid, book in part2book:
            partrep = op.records.getreplies(partid)
            results = partrep['pushkey']
            assert len(results) <= 1
            if not results:
                pushop.ui.warn(_('server ignored bookmark %s update\n') % book)
            else:
                ret = int(results[0]['return'])
                if ret:
                    pushop.ui.status(_("updating bookmark %s\n") % book)
                else:
                    pushop.ui.warn(_('updating bookmark %s failed!\n') % book)
    return handlereply


def _pushbundle2(pushop):
    """push data to the remote using bundle2

    The only currently supported type of data is changegroup but this will
    evolve in the future."""
    bundler = bundle2.bundle20(pushop.ui, bundle2.bundle2caps(pushop.remote))
    # create reply capability
-    capsblob = bundle2.encodecaps(pushop.repo.bundle2caps)
+    capsblob = bundle2.encodecaps(bundle2.capabilities)
    bundler.newpart('b2x:replycaps', data=capsblob)
    replyhandlers = []
    for partgenname in b2partsgenorder:
        partgen = b2partsgenmapping[partgenname]
        ret = partgen(pushop, bundler)
        if callable(ret):
            replyhandlers.append(ret)
    # do not push if nothing to push
    if bundler.nbparts <= 1:
        return
    stream = util.chunkbuffer(bundler.getchunks())
    try:
        reply = pushop.remote.unbundle(stream, ['force'], 'push')
    except error.BundleValueError, exc:
        raise util.Abort('missing support for %s' % exc)
    try:
        op = bundle2.processbundle(pushop.repo, reply)
    except error.BundleValueError, exc:
        raise util.Abort('missing support for %s' % exc)
    for rephand in replyhandlers:
        rephand(op)

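# A minimal sketch of the protocol _pushbundle2 relies on: a parts
# generator may return a callable, which is invoked later with the
# bundleoperation built from the server reply. The part type
# 'b2x:myext:noop' is hypothetical.
@b2partsgenerator('myext-noop')
def _pushb2noop(pushop, bundler):
    part = bundler.newpart('b2x:myext:noop')
    def handlereply(op):
        replies = op.records.getreplies(part.id)
        pushop.ui.debug('myext: reply records: %r\n' % (replies,))
    return handlereply
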
def _pushchangeset(pushop):
    """Make the actual push of changeset bundle to remote repo"""
    if 'changesets' in pushop.stepsdone:
        return
    pushop.stepsdone.add('changesets')
    if not _pushcheckoutgoing(pushop):
        return
    pushop.repo.prepushoutgoinghooks(pushop.repo,
                                     pushop.remote,
                                     pushop.outgoing)
    outgoing = pushop.outgoing
    unbundle = pushop.remote.capable('unbundle')
    # TODO: get bundlecaps from remote
    bundlecaps = None
    # create a changegroup from local
    if pushop.revs is None and not (outgoing.excluded
                                    or pushop.repo.changelog.filteredrevs):
        # push everything,
        # use the fast path, no race possible on push
        bundler = changegroup.bundle10(pushop.repo, bundlecaps)
        cg = changegroup.getsubset(pushop.repo,
                                   outgoing,
                                   bundler,
                                   'push',
                                   fastpath=True)
    else:
        cg = changegroup.getlocalbundle(pushop.repo, 'push', outgoing,
                                        bundlecaps)

    # apply changegroup to remote
    if unbundle:
        # local repo finds heads on server, finds out what
        # revs it must push. once revs transferred, if server
        # finds it has different heads (someone else won
        # commit/push race), server aborts.
        if pushop.force:
            remoteheads = ['force']
        else:
            remoteheads = pushop.remoteheads
        # ssh: return remote's addchangegroup()
        # http: return remote's addchangegroup() or 0 for error
        pushop.ret = pushop.remote.unbundle(cg, remoteheads,
                                            pushop.repo.url())
    else:
        # we return an integer indicating remote head count
        # change
        pushop.ret = pushop.remote.addchangegroup(cg, 'push', pushop.repo.url())

def _pushsyncphase(pushop):
    """synchronise phase information locally and remotely"""
    cheads = pushop.commonheads
    # even when we don't push, exchanging phase data is useful
    remotephases = pushop.remote.listkeys('phases')
    if (pushop.ui.configbool('ui', '_usedassubrepo', False)
        and remotephases    # server supports phases
        and pushop.ret is None # nothing was pushed
        and remotephases.get('publishing', False)):
        # When:
        # - this is a subrepo push
        # - and remote supports phases
        # - and no changeset was pushed
        # - and remote is publishing
        # We may be in issue 3871 case!
        # We drop the possible phase synchronisation done by
        # courtesy to publish changesets possibly locally draft
        # on the remote.
        remotephases = {'publishing': 'True'}
    if not remotephases: # old server or public only reply from non-publishing
        _localphasemove(pushop, cheads)
        # don't push any phase data as there is nothing to push
    else:
        ana = phases.analyzeremotephases(pushop.repo, cheads,
                                         remotephases)
        pheads, droots = ana
        ### Apply remote phase on local
        if remotephases.get('publishing', False):
            _localphasemove(pushop, cheads)
        else: # publish = False
            _localphasemove(pushop, pheads)
            _localphasemove(pushop, cheads, phases.draft)
        ### Apply local phase on remote

        if pushop.ret:
            if 'phases' in pushop.stepsdone:
                # phases already pushed through bundle2
                return
            outdated = pushop.outdatedphases
        else:
            outdated = pushop.fallbackoutdatedphases

        pushop.stepsdone.add('phases')

        # filter heads already turned public by the push
        outdated = [c for c in outdated if c.node() not in pheads]
        b2caps = bundle2.bundle2caps(pushop.remote)
        if 'b2x:pushkey' in b2caps:
            # server supports bundle2, let's do a batched push through it
            #
            # This will eventually be unified with the changesets bundle2 push
            bundler = bundle2.bundle20(pushop.ui, b2caps)
            capsblob = bundle2.encodecaps(pushop.repo.bundle2caps)
            bundler.newpart('b2x:replycaps', data=capsblob)
            part2node = []
            enc = pushkey.encode
            for newremotehead in outdated:
                part = bundler.newpart('b2x:pushkey')
                part.addparam('namespace', enc('phases'))
                part.addparam('key', enc(newremotehead.hex()))
                part.addparam('old', enc(str(phases.draft)))
                part.addparam('new', enc(str(phases.public)))
                part2node.append((part.id, newremotehead))
            stream = util.chunkbuffer(bundler.getchunks())
            try:
                reply = pushop.remote.unbundle(stream, ['force'], 'push')
                op = bundle2.processbundle(pushop.repo, reply)
            except error.BundleValueError, exc:
                raise util.Abort('missing support for %s' % exc)
            for partid, node in part2node:
                partrep = op.records.getreplies(partid)
                results = partrep['pushkey']
                assert len(results) <= 1
                msg = None
                if not results:
                    msg = _('server ignored update of %s to public!\n') % node
                elif not int(results[0]['return']):
                    msg = _('updating %s to public failed!\n') % node
                if msg is not None:
                    pushop.ui.warn(msg)

        else:
            # fallback to independent pushkey command
            for newremotehead in outdated:
                r = pushop.remote.pushkey('phases',
                                          newremotehead.hex(),
                                          str(phases.draft),
                                          str(phases.public))
                if not r:
                    pushop.ui.warn(_('updating %s to public failed!\n')
                                   % newremotehead)

def _localphasemove(pushop, nodes, phase=phases.public):
    """move <nodes> to <phase> in the local source repo"""
    if pushop.locallocked:
        tr = pushop.repo.transaction('push-phase-sync')
        try:
            phases.advanceboundary(pushop.repo, tr, phase, nodes)
            tr.close()
        finally:
            tr.release()
    else:
        # repo is not locked, do not change any phases!
        # Inform the user that phases should have been moved when
        # applicable.
        actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
        phasestr = phases.phasenames[phase]
        if actualmoves:
            pushop.ui.status(_('cannot lock source repo, skipping '
                               'local %s phase update\n') % phasestr)

def _pushobsolete(pushop):
    """utility function to push obsolete markers to a remote"""
    if 'obsmarkers' in pushop.stepsdone:
        return
    pushop.ui.debug('try to push obsolete markers to remote\n')
    repo = pushop.repo
    remote = pushop.remote
    pushop.stepsdone.add('obsmarkers')
    if pushop.outobsmarkers:
        rslts = []
        remotedata = obsolete._pushkeyescape(pushop.outobsmarkers)
        for key in sorted(remotedata, reverse=True):
            # reverse sort to ensure we end with dump0
            data = remotedata[key]
            rslts.append(remote.pushkey('obsolete', key, '', data))
        if [r for r in rslts if not r]:
            msg = _('failed to push some obsolete markers!\n')
            repo.ui.warn(msg)

def _pushbookmark(pushop):
    """Update bookmark position on remote"""
    if pushop.ret == 0 or 'bookmarks' in pushop.stepsdone:
        return
    pushop.stepsdone.add('bookmarks')
    ui = pushop.ui
    remote = pushop.remote
    for b, old, new in pushop.outbookmarks:
        if remote.pushkey('bookmarks', b, old, new):
            ui.status(_("updating bookmark %s\n") % b)
        else:
            ui.warn(_('updating bookmark %s failed!\n') % b)

class pulloperation(object):
    """An object that represents a single pull operation.

    Its purpose is to carry pull-related state and very common operations.

    A new one should be created at the beginning of each pull and discarded
    afterward.
    """

    def __init__(self, repo, remote, heads=None, force=False):
        # repo we pull into
        self.repo = repo
        # repo we pull from
        self.remote = remote
        # revision we try to pull (None is "all")
        self.heads = heads
        # do we force pull?
        self.force = force
        # the name of the pull transaction
        self._trname = 'pull\n' + util.hidepassword(remote.url())
        # hold the transaction once created
        self._tr = None
        # set of common changesets between local and remote before pull
        self.common = None
        # set of pulled heads
        self.rheads = None
        # list of missing changesets to fetch remotely
        self.fetch = None
        # result of changegroup pulling (used as return code by pull)
        self.cgresult = None
        # list of steps remaining to do (related to future bundle2 usage)
        self.todosteps = set(['changegroup', 'phases', 'obsmarkers'])

    @util.propertycache
    def pulledsubset(self):
        """heads of the set of changesets targeted by the pull"""
        # compute target subset
        if self.heads is None:
            # We pulled everything possible
            # sync on everything common
            c = set(self.common)
            ret = list(self.common)
            for n in self.rheads:
                if n not in c:
                    ret.append(n)
            return ret
        else:
            # We pulled a specific subset
            # sync on this subset
            return self.heads

    def gettransaction(self):
        """get an appropriate pull transaction, creating it if needed"""
        if self._tr is None:
            self._tr = self.repo.transaction(self._trname)
        return self._tr

    def closetransaction(self):
        """close the transaction if created"""
        if self._tr is not None:
            self._tr.close()

    def releasetransaction(self):
        """release the transaction if created"""
        if self._tr is not None:
            self._tr.release()

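# A minimal sketch of the lazy-transaction pattern above: consumers
# (such as the b2x:obsmarkers handler in bundle2.py) call
# gettransaction() only when they actually write, so a pull that
# transfers nothing never opens a transaction. 'rawmarkers' is
# hypothetical input.
def applymarkers(pullop, rawmarkers):
    tr = pullop.gettransaction()  # created on first use only
    return pullop.repo.obsstore.mergemarkers(tr, rawmarkers)
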
def pull(repo, remote, heads=None, force=False):
    pullop = pulloperation(repo, remote, heads, force)
    if pullop.remote.local():
        missing = set(pullop.remote.requirements) - pullop.repo.supported
        if missing:
            msg = _("required features are not"
                    " supported in the destination:"
                    " %s") % (', '.join(sorted(missing)))
            raise util.Abort(msg)

    lock = pullop.repo.lock()
    try:
        _pulldiscovery(pullop)
        if (pullop.repo.ui.configbool('experimental', 'bundle2-exp', False)
            and pullop.remote.capable('bundle2-exp')):
            _pullbundle2(pullop)
        if 'changegroup' in pullop.todosteps:
            _pullchangeset(pullop)
        if 'phases' in pullop.todosteps:
            _pullphase(pullop)
        if 'obsmarkers' in pullop.todosteps:
            _pullobsolete(pullop)
        pullop.closetransaction()
    finally:
        pullop.releasetransaction()
        lock.release()

    return pullop.cgresult

def _pulldiscovery(pullop):
    """discovery phase for the pull

    Currently handles changeset discovery only; it will eventually handle
    all discovery at some point."""
    tmp = discovery.findcommonincoming(pullop.repo.unfiltered(),
                                       pullop.remote,
                                       heads=pullop.heads,
                                       force=pullop.force)
    pullop.common, pullop.fetch, pullop.rheads = tmp

def _pullbundle2(pullop):
    """pull data using bundle2

    For now, the only supported data is the changegroup."""
    remotecaps = bundle2.bundle2caps(pullop.remote)
    kwargs = {'bundlecaps': caps20to10(pullop.repo)}
    # pulling changegroup
    pullop.todosteps.remove('changegroup')

    kwargs['common'] = pullop.common
    kwargs['heads'] = pullop.heads or pullop.rheads
    if 'b2x:listkeys' in remotecaps:
        kwargs['listkeys'] = ['phase']
    if not pullop.fetch:
        pullop.repo.ui.status(_("no changes found\n"))
        pullop.cgresult = 0
    else:
        if pullop.heads is None and list(pullop.common) == [nullid]:
            pullop.repo.ui.status(_("requesting all changes\n"))
    _pullbundle2extraprepare(pullop, kwargs)
    if kwargs.keys() == ['format']:
        return # nothing to pull
    bundle = pullop.remote.getbundle('pull', **kwargs)
    try:
        op = bundle2.processbundle(pullop.repo, bundle, pullop.gettransaction)
    except error.BundleValueError, exc:
        raise util.Abort('missing support for %s' % exc)

    if pullop.fetch:
        assert len(op.records['changegroup']) == 1
        pullop.cgresult = op.records['changegroup'][0]['return']

    # processing phases change
    for namespace, value in op.records['listkeys']:
        if namespace == 'phases':
            _pullapplyphases(pullop, value)

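# For reference, the kwargs assembled by _pullbundle2 above end up shaped
# roughly like this (values illustrative, taken from the assignments above):
#
#     {'bundlecaps': caps20to10(repo),   # set of capability strings
#      'common': pullop.common,          # nodes known on both sides
#      'heads': pullop.heads or pullop.rheads,
#      'listkeys': ['phase']}            # only if the remote supports it
#
# before being passed straight to remote.getbundle('pull', **kwargs).
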
def _pullbundle2extraprepare(pullop, kwargs):
    """hook function so that extensions can extend the getbundle call"""
    pass

def _pullchangeset(pullop):
    """pull changesets from the remote into the local repo"""
    # We delay opening the transaction as long as possible so we don't
    # open a transaction for nothing and don't break a future useful
    # rollback call
    pullop.todosteps.remove('changegroup')
    if not pullop.fetch:
        pullop.repo.ui.status(_("no changes found\n"))
        pullop.cgresult = 0
        return
    pullop.gettransaction()
    if pullop.heads is None and list(pullop.common) == [nullid]:
        pullop.repo.ui.status(_("requesting all changes\n"))
    elif pullop.heads is None and pullop.remote.capable('changegroupsubset'):
        # issue1320, avoid a race if remote changed after discovery
        pullop.heads = pullop.rheads

    if pullop.remote.capable('getbundle'):
        # TODO: get bundlecaps from remote
        cg = pullop.remote.getbundle('pull', common=pullop.common,
                                     heads=pullop.heads or pullop.rheads)
    elif pullop.heads is None:
        cg = pullop.remote.changegroup(pullop.fetch, 'pull')
    elif not pullop.remote.capable('changegroupsubset'):
        raise util.Abort(_("partial pull cannot be done because "
                           "other repository doesn't support "
                           "changegroupsubset."))
    else:
        cg = pullop.remote.changegroupsubset(pullop.fetch, pullop.heads, 'pull')
    pullop.cgresult = changegroup.addchangegroup(pullop.repo, cg, 'pull',
                                                 pullop.remote.url())

def _pullphase(pullop):
    # Get phases data from the remote
    remotephases = pullop.remote.listkeys('phases')
    _pullapplyphases(pullop, remotephases)

867 def _pullapplyphases(pullop, remotephases):
867 def _pullapplyphases(pullop, remotephases):
868 """apply phase movement from observed remote state"""
868 """apply phase movement from observed remote state"""
869 pullop.todosteps.remove('phases')
869 pullop.todosteps.remove('phases')
870 publishing = bool(remotephases.get('publishing', False))
870 publishing = bool(remotephases.get('publishing', False))
871 if remotephases and not publishing:
871 if remotephases and not publishing:
872 # remote is new and unpublishing
872 # remote is new and unpublishing
873 pheads, _dr = phases.analyzeremotephases(pullop.repo,
873 pheads, _dr = phases.analyzeremotephases(pullop.repo,
874 pullop.pulledsubset,
874 pullop.pulledsubset,
875 remotephases)
875 remotephases)
876 dheads = pullop.pulledsubset
876 dheads = pullop.pulledsubset
877 else:
877 else:
878 # Remote is old or publishing all common changesets
878 # Remote is old or publishing all common changesets
879 # should be seen as public
879 # should be seen as public
880 pheads = pullop.pulledsubset
880 pheads = pullop.pulledsubset
881 dheads = []
881 dheads = []
882 unfi = pullop.repo.unfiltered()
882 unfi = pullop.repo.unfiltered()
883 phase = unfi._phasecache.phase
883 phase = unfi._phasecache.phase
884 rev = unfi.changelog.nodemap.get
884 rev = unfi.changelog.nodemap.get
885 public = phases.public
885 public = phases.public
886 draft = phases.draft
886 draft = phases.draft
887
887
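    # Phase values are ordered integers: phases.public (0) < phases.draft (1)
    # < phases.secret (2). The filters below rely on that ordering:
    # `phase(unfi, rev(pn)) > public` keeps only nodes that are still draft
    # or secret locally, i.e. the ones whose boundary actually needs
    # advancing.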
    # exclude changesets already public locally and update the others
    pheads = [pn for pn in pheads if phase(unfi, rev(pn)) > public]
    if pheads:
        tr = pullop.gettransaction()
        phases.advanceboundary(pullop.repo, tr, public, pheads)

    # exclude changesets already draft locally and update the others
    dheads = [pn for pn in dheads if phase(unfi, rev(pn)) > draft]
    if dheads:
        tr = pullop.gettransaction()
        phases.advanceboundary(pullop.repo, tr, draft, dheads)

def _pullobsolete(pullop):
    """utility function to pull obsolete markers from a remote

    The `gettransaction` function returns the pull transaction, creating one
    if necessary. We return the transaction to inform the calling code that
    a new transaction has been created (when applicable).

    Exists mostly to allow overriding for experimentation purposes"""
    pullop.todosteps.remove('obsmarkers')
    tr = None
    if obsolete._enabled:
        pullop.repo.ui.debug('fetching remote obsolete markers\n')
        remoteobs = pullop.remote.listkeys('obsolete')
        if 'dump0' in remoteobs:
            tr = pullop.gettransaction()
            for key in sorted(remoteobs, reverse=True):
                if key.startswith('dump'):
                    data = base85.b85decode(remoteobs[key])
                    pullop.repo.obsstore.mergemarkers(tr, data)
            pullop.repo.invalidatevolatilesets()
    return tr

def caps20to10(repo):
    """return a set with appropriate options to use bundle20 during getbundle"""
    caps = set(['HG2X'])
-    capsblob = bundle2.encodecaps(repo.bundle2caps)
+    capsblob = bundle2.encodecaps(bundle2.capabilities)
    caps.add('bundle2=' + urllib.quote(capsblob))
    return caps

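# For illustration (the exact blob contents are indicative only): the set
# built by caps20to10 looks roughly like
#
#     set(['HG2X', 'bundle2=HG2X%0Ab2x%3Achangegroup%0A...'])
#
# i.e. the bundle2 capability blob is URL-quoted so it can travel through
# the old-style bundlecaps channel, which only carries plain strings.
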
def getbundle(repo, source, heads=None, common=None, bundlecaps=None,
              **kwargs):
    """return a full bundle (with potentially multiple kinds of parts)

    Could be a bundle HG10 or a bundle HG2X depending on the bundlecaps
    passed. For now, the bundle can contain only a changegroup, but this
    will change when more part types become available for bundle2.

    This is different from changegroup.getbundle, which only returns an HG10
    changegroup bundle. They may eventually get reunited in the future when
    we have a clearer idea of the API we want to use to query different data.

    The implementation is at a very early stage and will get massive rework
    when the API of bundle is refined.
    """
    cg = None
    if kwargs.get('cg', True):
        # build changegroup bundle here.
        cg = changegroup.getbundle(repo, source, heads=heads,
                                   common=common, bundlecaps=bundlecaps)
    elif 'HG2X' not in bundlecaps:
        raise ValueError(_('request for bundle10 must include changegroup'))
    if bundlecaps is None or 'HG2X' not in bundlecaps:
        if kwargs:
            raise ValueError(_('unsupported getbundle arguments: %s')
                             % ', '.join(sorted(kwargs.keys())))
        return cg
    # very crude first implementation,
    # the bundle API will change and the generation will be done lazily.
    b2caps = {}
    for bcaps in bundlecaps:
        if bcaps.startswith('bundle2='):
            blob = urllib.unquote(bcaps[len('bundle2='):])
            b2caps.update(bundle2.decodecaps(blob))
    bundler = bundle2.bundle20(repo.ui, b2caps)
    if cg:
        bundler.newpart('b2x:changegroup', data=cg.getchunks())
    listkeys = kwargs.get('listkeys', ())
    for namespace in listkeys:
        part = bundler.newpart('b2x:listkeys')
        part.addparam('namespace', namespace)
        keys = repo.listkeys(namespace).items()
        part.data = pushkey.encodekeys(keys)
    _getbundleextrapart(bundler, repo, source, heads=heads, common=common,
                        bundlecaps=bundlecaps, **kwargs)
    return util.chunkbuffer(bundler.getchunks())

def _getbundleextrapart(bundler, repo, source, heads=None, common=None,
                        bundlecaps=None, **kwargs):
    """hook function to let extensions add parts to the requested bundle"""
    pass

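# A note on the head-checking protocol used below: a client may send
# `their_heads` in one of three forms. ['force'] skips the race check
# entirely; an explicit list of head nodes must match repo.heads() exactly;
# and ['hashed', digest] carries only a sha1 over the sorted binary heads,
# which check_heads recomputes locally, so a large head list never has to
# cross the wire.
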
def check_heads(repo, their_heads, context):
    """check if the heads of a repo have been modified

    Used by peer for unbundling.
    """
    heads = repo.heads()
    heads_hash = util.sha1(''.join(sorted(heads))).digest()
    if not (their_heads == ['force'] or their_heads == heads or
            their_heads == ['hashed', heads_hash]):
        # someone else committed/pushed/unbundled while we
        # were transferring data
        raise error.PushRaced('repository changed while %s - '
                              'please try again' % context)

def unbundle(repo, cg, heads, source, url):
    """Apply a bundle to a repo.

    This function makes sure the repo is locked during the application and
    has a mechanism to check that no push race occurred between the creation
    of the bundle and its application.

    If the push was raced, a PushRaced exception is raised."""
    r = 0
    # need a transaction when processing a bundle2 stream
    tr = None
    lock = repo.lock()
    try:
        check_heads(repo, heads, 'uploading changes')
        # push can proceed
        if util.safehasattr(cg, 'params'):
            try:
                tr = repo.transaction('unbundle')
                tr.hookargs['bundle2-exp'] = '1'
                r = bundle2.processbundle(repo, cg, lambda: tr).reply
                cl = repo.unfiltered().changelog
                p = cl.writepending() and repo.root or ""
                repo.hook('b2x-pretransactionclose', throw=True, source=source,
                          url=url, pending=p, **tr.hookargs)
                tr.close()
                repo.hook('b2x-transactionclose', source=source, url=url,
                          **tr.hookargs)
            except Exception, exc:
                exc.duringunbundle2 = True
                raise
        else:
            r = changegroup.addchangegroup(repo, cg, source, url)
    finally:
        if tr is not None:
            tr.release()
        lock.release()
    return r
@@ -1,1781 +1,1775 @@
# localrepo.py - read/write repository class for mercurial
#
# Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
from node import hex, nullid, short
from i18n import _
import urllib
import peer, changegroup, subrepo, pushkey, obsolete, repoview
import changelog, dirstate, filelog, manifest, context, bookmarks, phases
import lock as lockmod
import transaction, store, encoding, exchange, bundle2
import scmutil, util, extensions, hook, error, revset
import match as matchmod
import merge as mergemod
import tags as tagsmod
from lock import release
import weakref, errno, os, time, inspect
import branchmap, pathutil
propertycache = util.propertycache
filecache = scmutil.filecache

class repofilecache(filecache):
    """All filecache usage on repo is done for logic that should be unfiltered
    """

    def __get__(self, repo, type=None):
        return super(repofilecache, self).__get__(repo.unfiltered(), type)
    def __set__(self, repo, value):
        return super(repofilecache, self).__set__(repo.unfiltered(), value)
    def __delete__(self, repo):
        return super(repofilecache, self).__delete__(repo.unfiltered())

class storecache(repofilecache):
    """filecache for files in the store"""
    def join(self, obj, fname):
        return obj.sjoin(fname)

class unfilteredpropertycache(propertycache):
    """propertycache that applies to the unfiltered repo only"""

    def __get__(self, repo, type=None):
        unfi = repo.unfiltered()
        if unfi is repo:
            return super(unfilteredpropertycache, self).__get__(unfi)
        return getattr(unfi, self.name)

class filteredpropertycache(propertycache):
    """propertycache that must take filtering into account"""

    def cachevalue(self, obj, value):
        object.__setattr__(obj, self.name, value)


def hasunfilteredcache(repo, name):
    """check if a repo has an unfilteredpropertycache value for <name>"""
    return name in vars(repo.unfiltered())

def unfilteredmethod(orig):
    """decorate a method that always needs to be run on the unfiltered
    version of the repo"""
    def wrapper(repo, *args, **kwargs):
        return orig(repo.unfiltered(), *args, **kwargs)
    return wrapper

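# Usage sketch (the method name is illustrative): applied inside
# localrepository, the decorator reroutes the call through repo.unfiltered()
# so the wrapped method never sees a filtered repoview:
#
#     @unfilteredmethod
#     def destroyed(self):
#         ...  # always runs against the unfiltered repository
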
moderncaps = set(('lookup', 'branchmap', 'pushkey', 'known', 'getbundle',
                  'unbundle'))
legacycaps = moderncaps.union(set(['changegroupsubset']))

class localpeer(peer.peerrepository):
    '''peer for a local repo; reflects only the most recent API'''

    def __init__(self, repo, caps=moderncaps):
        peer.peerrepository.__init__(self)
        self._repo = repo.filtered('served')
        self.ui = repo.ui
        self._caps = repo._restrictcapabilities(caps)
        self.requirements = repo.requirements
        self.supportedformats = repo.supportedformats

    def close(self):
        self._repo.close()

    def _capabilities(self):
        return self._caps

    def local(self):
        return self._repo

    def canpush(self):
        return True

    def url(self):
        return self._repo.url()

    def lookup(self, key):
        return self._repo.lookup(key)

    def branchmap(self):
        return self._repo.branchmap()

    def heads(self):
        return self._repo.heads()

    def known(self, nodes):
        return self._repo.known(nodes)

    def getbundle(self, source, heads=None, common=None, bundlecaps=None,
                  format='HG10', **kwargs):
        cg = exchange.getbundle(self._repo, source, heads=heads,
                                common=common, bundlecaps=bundlecaps, **kwargs)
        if bundlecaps is not None and 'HG2X' in bundlecaps:
            # When requesting a bundle2, getbundle returns a stream to make
            # the wire level function happier. We need to build a proper
            # object from it in local peer.
            cg = bundle2.unbundle20(self.ui, cg)
        return cg

    # TODO We might want to move the next two calls into legacypeer and add
    # unbundle instead.

    def unbundle(self, cg, heads, url):
        """apply a bundle on a repo

        This function handles the repo locking itself."""
        try:
            cg = exchange.readbundle(self.ui, cg, None)
            ret = exchange.unbundle(self._repo, cg, heads, 'push', url)
            if util.safehasattr(ret, 'getchunks'):
                # This is a bundle20 object, turn it into an unbundler.
                # This little dance should be dropped eventually when the API
                # is finally improved.
                stream = util.chunkbuffer(ret.getchunks())
                ret = bundle2.unbundle20(self.ui, stream)
            return ret
        except error.PushRaced, exc:
            raise error.ResponseError(_('push failed:'), str(exc))

    def lock(self):
        return self._repo.lock()

    def addchangegroup(self, cg, source, url):
        return changegroup.addchangegroup(self._repo, cg, source, url)

    def pushkey(self, namespace, key, old, new):
        return self._repo.pushkey(namespace, key, old, new)

    def listkeys(self, namespace):
        return self._repo.listkeys(namespace)

    def debugwireargs(self, one, two, three=None, four=None, five=None):
        '''used to test argument passing over the wire'''
        return "%s %s %s %s %s" % (one, two, three, four, five)

class locallegacypeer(localpeer):
    '''peer extension which implements legacy methods too; used for tests with
    restricted capabilities'''

    def __init__(self, repo):
        localpeer.__init__(self, repo, caps=legacycaps)

    def branches(self, nodes):
        return self._repo.branches(nodes)

    def between(self, pairs):
        return self._repo.between(pairs)

    def changegroup(self, basenodes, source):
        return changegroup.changegroup(self._repo, basenodes, source)

    def changegroupsubset(self, bases, heads, source):
        return changegroup.changegroupsubset(self._repo, bases, heads, source)

class localrepository(object):

    supportedformats = set(('revlogv1', 'generaldelta'))
    _basesupported = supportedformats | set(('store', 'fncache', 'shared',
                                             'dotencode'))
    openerreqs = set(('revlogv1', 'generaldelta'))
    requirements = ['revlogv1']
    filtername = None

-    bundle2caps = {'HG2X': (),
-                   'b2x:listkeys': (),
-                   'b2x:pushkey': (),
-                   'b2x:changegroup': (),
-                  }
-
    # a list of (ui, featureset) functions.
    # only functions defined in modules of enabled extensions are invoked
    featuresetupfuncs = set()

    def _baserequirements(self, create):
        return self.requirements[:]

    def __init__(self, baseui, path=None, create=False):
        self.wvfs = scmutil.vfs(path, expandpath=True, realpath=True)
        self.wopener = self.wvfs
        self.root = self.wvfs.base
        self.path = self.wvfs.join(".hg")
        self.origroot = path
        self.auditor = pathutil.pathauditor(self.root, self._checknested)
        self.vfs = scmutil.vfs(self.path)
        self.opener = self.vfs
        self.baseui = baseui
        self.ui = baseui.copy()
        self.ui.copy = baseui.copy # prevent copying repo configuration
        # A list of callbacks to shape the phase if no data were found.
        # Callbacks are in the form: func(repo, roots) --> processed roots.
        # This list is to be filled by extensions during repo setup
        self._phasedefaults = []
        try:
            self.ui.readconfig(self.join("hgrc"), self.root)
            extensions.loadall(self.ui)
        except IOError:
            pass

        if self.featuresetupfuncs:
            self.supported = set(self._basesupported) # use private copy
            extmods = set(m.__name__ for n, m
                          in extensions.extensions(self.ui))
            for setupfunc in self.featuresetupfuncs:
                if setupfunc.__module__ in extmods:
                    setupfunc(self.ui, self.supported)
        else:
            self.supported = self._basesupported

        if not self.vfs.isdir():
            if create:
                if not self.wvfs.exists():
                    self.wvfs.makedirs()
                self.vfs.makedir(notindexed=True)
                requirements = self._baserequirements(create)
                if self.ui.configbool('format', 'usestore', True):
                    self.vfs.mkdir("store")
                    requirements.append("store")
                    if self.ui.configbool('format', 'usefncache', True):
                        requirements.append("fncache")
                        if self.ui.configbool('format', 'dotencode', True):
                            requirements.append('dotencode')
                    # create an invalid changelog
                    self.vfs.append(
                        "00changelog.i",
                        '\0\0\0\2' # represents revlogv2
                        ' dummy changelog to prevent using the old repo layout'
                    )
                if self.ui.configbool('format', 'generaldelta', False):
                    requirements.append("generaldelta")
                requirements = set(requirements)
            else:
                raise error.RepoError(_("repository %s not found") % path)
        elif create:
            raise error.RepoError(_("repository %s already exists") % path)
        else:
            try:
                requirements = scmutil.readrequires(self.vfs, self.supported)
            except IOError, inst:
                if inst.errno != errno.ENOENT:
                    raise
                requirements = set()

        self.sharedpath = self.path
        try:
            vfs = scmutil.vfs(self.vfs.read("sharedpath").rstrip('\n'),
                              realpath=True)
            s = vfs.base
            if not vfs.exists():
                raise error.RepoError(
                    _('.hg/sharedpath points to nonexistent directory %s') % s)
            self.sharedpath = s
        except IOError, inst:
            if inst.errno != errno.ENOENT:
                raise

        self.store = store.store(requirements, self.sharedpath, scmutil.vfs)
        self.spath = self.store.path
        self.svfs = self.store.vfs
        self.sopener = self.svfs
        self.sjoin = self.store.join
        self.vfs.createmode = self.store.createmode
        self._applyrequirements(requirements)
        if create:
            self._writerequirements()


        self._branchcaches = {}
        self.filterpats = {}
        self._datafilters = {}
        self._transref = self._lockref = self._wlockref = None

        # A cache for various files under .hg/ that tracks file changes,
        # (used by the filecache decorator)
        #
        # Maps a property name to its util.filecacheentry
        self._filecache = {}

        # hold sets of revisions to be filtered
        # should be cleared when something might have changed the filter value:
        # - new changesets,
        # - phase change,
        # - new obsolescence marker,
        # - working directory parent change,
        # - bookmark changes
        self.filteredrevcache = {}

    def close(self):
        pass

    def _restrictcapabilities(self, caps):
        # bundle2 is not ready for prime time, drop it unless explicitly
        # required by the tests (or some brave tester)
        if self.ui.configbool('experimental', 'bundle2-exp', False):
            caps = set(caps)
-            capsblob = bundle2.encodecaps(self.bundle2caps)
+            capsblob = bundle2.encodecaps(bundle2.capabilities)
            caps.add('bundle2-exp=' + urllib.quote(capsblob))
        return caps

    def _applyrequirements(self, requirements):
        self.requirements = requirements
        self.sopener.options = dict((r, 1) for r in requirements
                                    if r in self.openerreqs)
        chunkcachesize = self.ui.configint('format', 'chunkcachesize')
        if chunkcachesize is not None:
            self.sopener.options['chunkcachesize'] = chunkcachesize

    def _writerequirements(self):
        reqfile = self.opener("requires", "w")
        for r in sorted(self.requirements):
            reqfile.write("%s\n" % r)
        reqfile.close()

    def _checknested(self, path):
        """Determine if path is a legal nested repository."""
        if not path.startswith(self.root):
            return False
        subpath = path[len(self.root) + 1:]
        normsubpath = util.pconvert(subpath)

        # XXX: Checking against the current working copy is wrong in
        # the sense that it can reject things like
        #
        #   $ hg cat -r 10 sub/x.txt
        #
        # if sub/ is no longer a subrepository in the working copy
        # parent revision.
        #
        # However, it can of course also allow things that would have
        # been rejected before, such as the above cat command if sub/
        # is a subrepository now, but was a normal directory before.
        # The old path auditor would have rejected by mistake since it
        # panics when it sees sub/.hg/.
        #
        # All in all, checking against the working copy seems sensible
        # since we want to prevent access to nested repositories on
        # the filesystem *now*.
        ctx = self[None]
        parts = util.splitpath(subpath)
        while parts:
            prefix = '/'.join(parts)
            if prefix in ctx.substate:
                if prefix == normsubpath:
                    return True
                else:
                    sub = ctx.sub(prefix)
                    return sub.checknested(subpath[len(prefix) + 1:])
            else:
                parts.pop()
        return False

    def peer(self):
        return localpeer(self) # not cached to avoid reference cycle

    def unfiltered(self):
        """Return unfiltered version of the repository

        Intended to be overwritten by filtered repo."""
        return self

    def filtered(self, name):
        """Return a filtered version of a repository"""
        # build a new class with the mixin and the current class
        # (possibly subclass of the repo)
        class proxycls(repoview.repoview, self.unfiltered().__class__):
            pass
        return proxycls(self, name)

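    # For illustration: the names passed to filtered() select predefined
    # repoviews (this module itself calls repo.filtered('served') in
    # localpeer above); 'served', for instance, is the view that hides
    # secret and hidden changesets from peers.
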
    @repofilecache('bookmarks')
    def _bookmarks(self):
        return bookmarks.bmstore(self)

    @repofilecache('bookmarks.current')
    def _bookmarkcurrent(self):
        return bookmarks.readcurrent(self)

    def bookmarkheads(self, bookmark):
        name = bookmark.split('@', 1)[0]
        heads = []
        for mark, n in self._bookmarks.iteritems():
            if mark.split('@', 1)[0] == name:
                heads.append(n)
        return heads

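    # The @storecache('<file>') properties below are tied to a file in
    # .hg/store by the filecache machinery: the file is stat'ed and the
    # property recomputed when it changes on disk, so e.g. _phasecache is
    # rebuilt after another process rewrites 'phaseroots'.
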
    @storecache('phaseroots')
    def _phasecache(self):
        return phases.phasecache(self, self._phasedefaults)

    @storecache('obsstore')
    def obsstore(self):
        store = obsolete.obsstore(self.sopener)
        if store and not obsolete._enabled:
            # message is rare enough to not be translated
            msg = 'obsolete feature not enabled but %i markers found!\n'
            self.ui.warn(msg % len(list(store)))
        return store

    @storecache('00changelog.i')
    def changelog(self):
        c = changelog.changelog(self.sopener)
        if 'HG_PENDING' in os.environ:
            p = os.environ['HG_PENDING']
            if p.startswith(self.root):
                c.readpending('00changelog.i.a')
        return c

    @storecache('00manifest.i')
    def manifest(self):
        return manifest.manifest(self.sopener)

    @repofilecache('dirstate')
    def dirstate(self):
        warned = [0]
        def validate(node):
            try:
                self.changelog.rev(node)
                return node
            except error.LookupError:
                if not warned[0]:
                    warned[0] = True
                    self.ui.warn(_("warning: ignoring unknown"
                                   " working parent %s!\n") % short(node))
                return nullid

        return dirstate.dirstate(self.opener, self.ui, self.root, validate)

    def __getitem__(self, changeid):
        if changeid is None:
            return context.workingctx(self)
        return context.changectx(self, changeid)

    def __contains__(self, changeid):
        try:
            return bool(self.lookup(changeid))
        except error.RepoLookupError:
            return False

    def __nonzero__(self):
        return True

    def __len__(self):
        return len(self.changelog)

    def __iter__(self):
        return iter(self.changelog)

    def revs(self, expr, *args):
        '''Return a list of revisions matching the given revset'''
        expr = revset.formatspec(expr, *args)
        m = revset.match(None, expr)
        return m(self, revset.spanset(self))

    def set(self, expr, *args):
        '''
        Yield a context for each matching revision, after doing arg
        replacement via revset.formatspec
        '''
        for r in self.revs(expr, *args):
            yield self[r]

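    # Usage sketch (the revset is illustrative): the *args are escaped and
    # substituted by revset.formatspec, so callers can write
    #
    #     for ctx in repo.set('branch(%s) and not public()', 'default'):
    #         ui.write('%s\n' % ctx)
    #
    # instead of assembling the revset string by hand.
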
    def url(self):
        return 'file:' + self.root

    def hook(self, name, throw=False, **args):
        """Call a hook, passing this repo instance.

        This is a convenience method to aid invoking hooks. Extensions likely
        won't call this unless they have registered a custom hook or are
        replacing code that is expected to call a hook.
        """
        return hook.hook(self.ui, self, name, throw, **args)

    @unfilteredmethod
    def _tag(self, names, node, message, local, user, date, extra={},
             editor=False):
        if isinstance(names, str):
            names = (names,)

        branches = self.branchmap()
        for name in names:
            self.hook('pretag', throw=True, node=hex(node), tag=name,
                      local=local)
            if name in branches:
                self.ui.warn(_("warning: tag %s conflicts with existing"
                               " branch name\n") % name)

        def writetags(fp, names, munge, prevtags):
            fp.seek(0, 2)
            if prevtags and prevtags[-1] != '\n':
                fp.write('\n')
            for name in names:
                m = munge and munge(name) or name
                if (self._tagscache.tagtypes and
                    name in self._tagscache.tagtypes):
                    old = self.tags().get(name, nullid)
                    fp.write('%s %s\n' % (hex(old), m))
                fp.write('%s %s\n' % (hex(node), m))
            fp.close()

        prevtags = ''
        if local:
            try:
                fp = self.opener('localtags', 'r+')
            except IOError:
                fp = self.opener('localtags', 'a')
            else:
                prevtags = fp.read()

            # local tags are stored in the current charset
            writetags(fp, names, None, prevtags)
            for name in names:
                self.hook('tag', node=hex(node), tag=name, local=local)
            return

        try:
            fp = self.wfile('.hgtags', 'rb+')
        except IOError, e:
            if e.errno != errno.ENOENT:
                raise
            fp = self.wfile('.hgtags', 'ab')
        else:
            prevtags = fp.read()

        # committed tags are stored in UTF-8
        writetags(fp, names, encoding.fromlocal, prevtags)

        fp.close()

        self.invalidatecaches()

        if '.hgtags' not in self.dirstate:
            self[None].add(['.hgtags'])

        m = matchmod.exact(self.root, '', ['.hgtags'])
        tagnode = self.commit(message, user, date, extra=extra, match=m,
                              editor=editor)

        for name in names:
            self.hook('tag', node=hex(node), tag=name, local=local)

        return tagnode

561 def tag(self, names, node, message, local, user, date, editor=False):
555 def tag(self, names, node, message, local, user, date, editor=False):
562 '''tag a revision with one or more symbolic names.
556 '''tag a revision with one or more symbolic names.
563
557
564 names is a list of strings or, when adding a single tag, names may be a
558 names is a list of strings or, when adding a single tag, names may be a
565 string.
559 string.
566
560
567 if local is True, the tags are stored in a per-repository file.
561 if local is True, the tags are stored in a per-repository file.
568 otherwise, they are stored in the .hgtags file, and a new
562 otherwise, they are stored in the .hgtags file, and a new
569 changeset is committed with the change.
563 changeset is committed with the change.
570
564
571 keyword arguments:
565 keyword arguments:
572
566
573 local: whether to store tags in non-version-controlled file
567 local: whether to store tags in non-version-controlled file
574 (default False)
568 (default False)
575
569
576 message: commit message to use if committing
570 message: commit message to use if committing
577
571
578 user: name of user to use if committing
572 user: name of user to use if committing
579
573
580 date: date tuple to use if committing'''
574 date: date tuple to use if committing'''
581
575
582 if not local:
576 if not local:
583 for x in self.status()[:5]:
577 for x in self.status()[:5]:
584 if '.hgtags' in x:
578 if '.hgtags' in x:
585 raise util.Abort(_('working copy of .hgtags is changed '
579 raise util.Abort(_('working copy of .hgtags is changed '
586 '(please commit .hgtags manually)'))
580 '(please commit .hgtags manually)'))
587
581
588 self.tags() # instantiate the cache
582 self.tags() # instantiate the cache
589 self._tag(names, node, message, local, user, date, editor=editor)
583 self._tag(names, node, message, local, user, date, editor=editor)
590
584
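    # Example (illustrative sketch, not part of the original file): tagging
    # the current tip both locally and globally; `repo` is assumed to be an
    # open localrepository instance.
    #
    #     node = repo['tip'].node()
    #     # stored in .hg/localtags, no changeset created:
    #     repo.tag(['nightly'], node, '', True, 'cron <cron@example.com>', None)
    #     # stored in .hgtags and committed as a new changeset:
    #     repo.tag(['v1.0'], node, 'Added tag v1.0', False,
    #              'Alice <alice@example.com>', None)
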
    @filteredpropertycache
    def _tagscache(self):
        '''Returns a tagscache object that contains various tags-related
        caches.'''

        # This simplifies its cache management by having one decorated
        # function (this one) and the rest simply fetch things from it.
        class tagscache(object):
            def __init__(self):
                # These two define the set of tags for this repository. tags
                # maps tag name to node; tagtypes maps tag name to 'global' or
                # 'local'. (Global tags are defined by .hgtags across all
                # heads, and local tags are defined in .hg/localtags.)
                # They constitute the in-memory cache of tags.
                self.tags = self.tagtypes = None

                self.nodetagscache = self.tagslist = None

        cache = tagscache()
        cache.tags, cache.tagtypes = self._findtags()

        return cache

    def tags(self):
        '''return a mapping of tag to node'''
        t = {}
        if self.changelog.filteredrevs:
            tags, tt = self._findtags()
        else:
            tags = self._tagscache.tags
        for k, v in tags.iteritems():
            try:
                # ignore tags to unknown nodes
                self.changelog.rev(v)
                t[k] = v
            except (error.LookupError, ValueError):
                pass
        return t

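    # Example (illustrative, not from the original file): listing tags with
    # their short hashes; assumes `repo` is a localrepository and `short`
    # comes from mercurial.node.
    #
    #     for name, n in sorted(repo.tags().iteritems()):
    #         print '%s -> %s' % (name, short(n))
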
    def _findtags(self):
        '''Do the hard work of finding tags.  Return a pair of dicts
        (tags, tagtypes) where tags maps tag name to node, and tagtypes
        maps tag name to a string like \'global\' or \'local\'.
        Subclasses or extensions are free to add their own tags, but
        should be aware that the returned dicts will be retained for the
        duration of the localrepo object.'''

        # XXX what tagtype should subclasses/extensions use?  Currently
        # mq and bookmarks add tags, but do not set the tagtype at all.
        # Should each extension invent its own tag type?  Should there
        # be one tagtype for all such "virtual" tags?  Or is the status
        # quo fine?

        alltags = {}    # map tag name to (node, hist)
        tagtypes = {}

        tagsmod.findglobaltags(self.ui, self, alltags, tagtypes)
        tagsmod.readlocaltags(self.ui, self, alltags, tagtypes)

        # Build the return dicts.  Have to re-encode tag names because
        # the tags module always uses UTF-8 (in order not to lose info
        # writing to the cache), but the rest of Mercurial wants them in
        # local encoding.
        tags = {}
        for (name, (node, hist)) in alltags.iteritems():
            if node != nullid:
                tags[encoding.tolocal(name)] = node
        tags['tip'] = self.changelog.tip()
        tagtypes = dict([(encoding.tolocal(name), value)
                         for (name, value) in tagtypes.iteritems()])
        return (tags, tagtypes)

    def tagtype(self, tagname):
        '''
        return the type of the given tag. result can be:

        'local'  : a local tag
        'global' : a global tag
        None     : tag does not exist
        '''

        return self._tagscache.tagtypes.get(tagname)

    def tagslist(self):
        '''return a list of tags ordered by revision'''
        if not self._tagscache.tagslist:
            l = []
            for t, n in self.tags().iteritems():
                l.append((self.changelog.rev(n), t, n))
            self._tagscache.tagslist = [(t, n) for r, t, n in sorted(l)]

        return self._tagscache.tagslist

    def nodetags(self, node):
        '''return the tags associated with a node'''
        if not self._tagscache.nodetagscache:
            nodetagscache = {}
            for t, n in self._tagscache.tags.iteritems():
                nodetagscache.setdefault(n, []).append(t)
            for tags in nodetagscache.itervalues():
                tags.sort()
            self._tagscache.nodetagscache = nodetagscache
        return self._tagscache.nodetagscache.get(node, [])

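    # Example (illustrative, not part of the original source): the reverse
    # lookups above can be combined to classify a node's tags; `repo` and
    # `node` are assumed to exist.
    #
    #     for t in repo.nodetags(node):
    #         print '%s (%s)' % (t, repo.tagtype(t) or 'unknown')
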
    def nodebookmarks(self, node):
        marks = []
        for bookmark, n in self._bookmarks.iteritems():
            if n == node:
                marks.append(bookmark)
        return sorted(marks)

    def branchmap(self):
        '''returns a dictionary {branch: [branchheads]} with branchheads
        ordered by increasing revision number'''
        branchmap.updatecache(self)
        return self._branchcaches[self.filtername]

    def branchtip(self, branch):
        '''return the tip node for a given branch'''
        try:
            return self.branchmap().branchtip(branch)
        except KeyError:
            raise error.RepoLookupError(_("unknown branch '%s'") % branch)

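    # Example (illustrative sketch, assuming the returned branch cache
    # supports dict-style iteration as the docstring above suggests; `repo`
    # and `short` from mercurial.node are assumed):
    #
    #     for branch, heads in repo.branchmap().iteritems():
    #         print '%s: %d head(s), tip %s' % (branch, len(heads),
    #                                           short(repo.branchtip(branch)))
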
    def lookup(self, key):
        return self[key].node()

    def lookupbranch(self, key, remote=None):
        repo = remote or self
        if key in repo.branchmap():
            return key

        repo = (remote and remote.local()) and remote or self
        return repo[key].branch()

    def known(self, nodes):
        nm = self.changelog.nodemap
        pc = self._phasecache
        result = []
        for n in nodes:
            r = nm.get(n)
            resp = not (r is None or pc.phase(self, r) >= phases.secret)
            result.append(resp)
        return result

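    # Example (illustrative, not from the original file): known() answers,
    # for each node, whether it is present locally and not secret -- the
    # shape discovery expects. `repo` is assumed; '\xff' * 20 stands in for
    # a hash that is not in the repository.
    #
    #     repo.known([repo['tip'].node(), '\xff' * 20])   # e.g. [True, False]
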
    def local(self):
        return self

    def cancopy(self):
        # so statichttprepo's override of local() works
        if not self.local():
            return False
        if not self.ui.configbool('phases', 'publish', True):
            return True
        # if publishing we can't copy if there is filtered content
        return not self.filtered('visible').changelog.filteredrevs

    def join(self, f):
        return os.path.join(self.path, f)

    def wjoin(self, f):
        return os.path.join(self.root, f)

    def file(self, f):
        if f[0] == '/':
            f = f[1:]
        return filelog.filelog(self.sopener, f)

    def changectx(self, changeid):
        return self[changeid]

    def parents(self, changeid=None):
        '''get list of changectxs for parents of changeid'''
        return self[changeid].parents()

    def setparents(self, p1, p2=nullid):
        copies = self.dirstate.setparents(p1, p2)
        pctx = self[p1]
        if copies:
            # Adjust copy records: the dirstate cannot do it itself, as it
            # requires access to the parents' manifests. Preserve them
            # only for entries added to first parent.
            for f in copies:
                if f not in pctx and copies[f] in pctx:
                    self.dirstate.copy(copies[f], f)
        if p2 == nullid:
            for f, s in sorted(self.dirstate.copies().items()):
                if f not in pctx and s not in pctx:
                    self.dirstate.copy(None, f)

    def filectx(self, path, changeid=None, fileid=None):
        """changeid can be a changeset revision, node, or tag.
        fileid can be a file revision or node."""
        return context.filectx(self, path, changeid, fileid)

    def getcwd(self):
        return self.dirstate.getcwd()

    def pathto(self, f, cwd=None):
        return self.dirstate.pathto(f, cwd)

    def wfile(self, f, mode='r'):
        return self.wopener(f, mode)

    def _link(self, f):
        return self.wvfs.islink(f)

    def _loadfilter(self, filter):
        if filter not in self.filterpats:
            l = []
            for pat, cmd in self.ui.configitems(filter):
                if cmd == '!':
                    continue
                mf = matchmod.match(self.root, '', [pat])
                fn = None
                params = cmd
                for name, filterfn in self._datafilters.iteritems():
                    if cmd.startswith(name):
                        fn = filterfn
                        params = cmd[len(name):].lstrip()
                        break
                if not fn:
                    fn = lambda s, c, **kwargs: util.filter(s, c)
                # Wrap old filters not supporting keyword arguments
                if not inspect.getargspec(fn)[2]:
                    oldfn = fn
                    fn = lambda s, c, **kwargs: oldfn(s, c)
                l.append((mf, fn, params))
            self.filterpats[filter] = l
        return self.filterpats[filter]

    def _filter(self, filterpats, filename, data):
        for mf, fn, cmd in filterpats:
            if mf(filename):
                self.ui.debug("filtering %s through %s\n" % (filename, cmd))
                data = fn(data, cmd, ui=self.ui, repo=self, filename=filename)
                break

        return data

    @unfilteredpropertycache
    def _encodefilterpats(self):
        return self._loadfilter('encode')

    @unfilteredpropertycache
    def _decodefilterpats(self):
        return self._loadfilter('decode')

    def adddatafilter(self, name, filter):
        self._datafilters[name] = filter

    def wread(self, filename):
        if self._link(filename):
            data = self.wvfs.readlink(filename)
        else:
            data = self.wopener.read(filename)
        return self._filter(self._encodefilterpats, filename, data)

    def wwrite(self, filename, data, flags):
        data = self._filter(self._decodefilterpats, filename, data)
        if 'l' in flags:
            self.wopener.symlink(data, filename)
        else:
            self.wopener.write(filename, data)
            if 'x' in flags:
                self.wvfs.setflags(filename, False, True)

    def wwritedata(self, filename, data):
        return self._filter(self._decodefilterpats, filename, data)

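    # Example (illustrative, not part of the original source): the filter
    # machinery above is driven by [encode]/[decode] hgrc sections mapping
    # file patterns to shell commands, e.g.
    #
    #     [encode]
    #     **.txt = dos2unix
    #     [decode]
    #     **.txt = unix2dos
    #
    # wread() applies the matching [encode] filter to data coming from the
    # working directory, and wwrite() applies [decode] on the way back out.
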
    def transaction(self, desc, report=None):
        tr = self._transref and self._transref() or None
        if tr and tr.running():
            return tr.nest()

        # abort here if the journal already exists
        if self.svfs.exists("journal"):
            raise error.RepoError(
                _("abandoned transaction found"),
                hint=_("run 'hg recover' to clean up transaction"))

        def onclose():
            self.store.write(self._transref())

        self._writejournal(desc)
        renames = [(vfs, x, undoname(x)) for vfs, x in self._journalfiles()]
        rp = report and report or self.ui.warn
        tr = transaction.transaction(rp, self.sopener,
                                     "journal",
                                     aftertrans(renames),
                                     self.store.createmode,
                                     onclose)
        self._transref = weakref.ref(tr)
        return tr

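    # Example (illustrative, not from the original file): the transaction
    # lifecycle as used by callers such as commitctx() below.
    #
    #     tr = repo.transaction('my-operation')
    #     try:
    #         ...             # append to revlogs, journalled files, etc.
    #         tr.close()      # commit the transaction
    #     finally:
    #         tr.release()    # rolls back if close() was never reached
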
    def _journalfiles(self):
        return ((self.svfs, 'journal'),
                (self.vfs, 'journal.dirstate'),
                (self.vfs, 'journal.branch'),
                (self.vfs, 'journal.desc'),
                (self.vfs, 'journal.bookmarks'),
                (self.svfs, 'journal.phaseroots'))

    def undofiles(self):
        return [(vfs, undoname(x)) for vfs, x in self._journalfiles()]

    def _writejournal(self, desc):
        self.opener.write("journal.dirstate",
                          self.opener.tryread("dirstate"))
        self.opener.write("journal.branch",
                          encoding.fromlocal(self.dirstate.branch()))
        self.opener.write("journal.desc",
                          "%d\n%s\n" % (len(self), desc))
        self.opener.write("journal.bookmarks",
                          self.opener.tryread("bookmarks"))
        self.sopener.write("journal.phaseroots",
                           self.sopener.tryread("phaseroots"))

    def recover(self):
        lock = self.lock()
        try:
            if self.svfs.exists("journal"):
                self.ui.status(_("rolling back interrupted transaction\n"))
                transaction.rollback(self.sopener, "journal",
                                     self.ui.warn)
                self.invalidate()
                return True
            else:
                self.ui.warn(_("no interrupted transaction available\n"))
                return False
        finally:
            lock.release()

    def rollback(self, dryrun=False, force=False):
        wlock = lock = None
        try:
            wlock = self.wlock()
            lock = self.lock()
            if self.svfs.exists("undo"):
                return self._rollback(dryrun, force)
            else:
                self.ui.warn(_("no rollback information available\n"))
                return 1
        finally:
            release(lock, wlock)

    @unfilteredmethod # Until we get smarter cache management
    def _rollback(self, dryrun, force):
        ui = self.ui
        try:
            args = self.opener.read('undo.desc').splitlines()
            (oldlen, desc, detail) = (int(args[0]), args[1], None)
            if len(args) >= 3:
                detail = args[2]
            oldtip = oldlen - 1

            if detail and ui.verbose:
                msg = (_('repository tip rolled back to revision %s'
                         ' (undo %s: %s)\n')
                       % (oldtip, desc, detail))
            else:
                msg = (_('repository tip rolled back to revision %s'
                         ' (undo %s)\n')
                       % (oldtip, desc))
        except IOError:
            msg = _('rolling back unknown transaction\n')
            desc = None

        if not force and self['.'] != self['tip'] and desc == 'commit':
            raise util.Abort(
                _('rollback of last commit while not checked out '
                  'may lose data'), hint=_('use -f to force'))

        ui.status(msg)
        if dryrun:
            return 0

        parents = self.dirstate.parents()
        self.destroying()
        transaction.rollback(self.sopener, 'undo', ui.warn)
        if self.vfs.exists('undo.bookmarks'):
            self.vfs.rename('undo.bookmarks', 'bookmarks')
        if self.svfs.exists('undo.phaseroots'):
            self.svfs.rename('undo.phaseroots', 'phaseroots')
        self.invalidate()

        parentgone = (parents[0] not in self.changelog.nodemap or
                      parents[1] not in self.changelog.nodemap)
        if parentgone:
            self.vfs.rename('undo.dirstate', 'dirstate')
            try:
                branch = self.opener.read('undo.branch')
                self.dirstate.setbranch(encoding.tolocal(branch))
            except IOError:
                ui.warn(_('named branch could not be reset: '
                          'current branch is still \'%s\'\n')
                        % self.dirstate.branch())

            self.dirstate.invalidate()
            parents = tuple([p.rev() for p in self.parents()])
            if len(parents) > 1:
                ui.status(_('working directory now based on '
                            'revisions %d and %d\n') % parents)
            else:
                ui.status(_('working directory now based on '
                            'revision %d\n') % parents)
        # TODO: if we know which new heads may result from this rollback, pass
        # them to destroy(), which will prevent the branchhead cache from being
        # invalidated.
        self.destroyed()
        return 0

    def invalidatecaches(self):

        if '_tagscache' in vars(self):
            # can't use delattr on proxy
            del self.__dict__['_tagscache']

        self.unfiltered()._branchcaches.clear()
        self.invalidatevolatilesets()

    def invalidatevolatilesets(self):
        self.filteredrevcache.clear()
        obsolete.clearobscaches(self)

    def invalidatedirstate(self):
        '''Invalidates the dirstate, causing the next call to dirstate
        to check if it was modified since the last time it was read,
        rereading it if it has.

        This is different from dirstate.invalidate() in that it doesn't
        always reread the dirstate. Use dirstate.invalidate() if you want
        to explicitly read the dirstate again (i.e. restoring it to a
        previous known good state).'''
        if hasunfilteredcache(self, 'dirstate'):
            for k in self.dirstate._filecache:
                try:
                    delattr(self.dirstate, k)
                except AttributeError:
                    pass
            delattr(self.unfiltered(), 'dirstate')

    def invalidate(self):
        unfiltered = self.unfiltered() # all file caches are stored unfiltered
        for k in self._filecache:
            # dirstate is invalidated separately in invalidatedirstate()
            if k == 'dirstate':
                continue

            try:
                delattr(unfiltered, k)
            except AttributeError:
                pass
        self.invalidatecaches()
        self.store.invalidatecaches()

    def invalidateall(self):
        '''Fully invalidates both store and non-store parts, causing the
        subsequent operation to reread any outside changes.'''
        # extension should hook this to invalidate its caches
        self.invalidate()
        self.invalidatedirstate()

    def _lock(self, vfs, lockname, wait, releasefn, acquirefn, desc):
        try:
            l = lockmod.lock(vfs, lockname, 0, releasefn, desc=desc)
        except error.LockHeld, inst:
            if not wait:
                raise
            self.ui.warn(_("waiting for lock on %s held by %r\n") %
                         (desc, inst.locker))
            # default to 600 seconds timeout
            l = lockmod.lock(vfs, lockname,
                             int(self.ui.config("ui", "timeout", "600")),
                             releasefn, desc=desc)
            self.ui.warn(_("got lock after %s seconds\n") % l.delay)
        if acquirefn:
            acquirefn()
        return l

    def _afterlock(self, callback):
        """add a callback to the current repository lock.

        The callback will be executed on lock release."""
        l = self._lockref and self._lockref()
        if l:
            l.postrelease.append(callback)
        else:
            callback()

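    # Example (illustrative sketch, not part of the original source):
    # deferring work until the store lock is released, the way commit()
    # below defers its 'commit' hook. `repo` and `newnode` are assumed.
    #
    #     def notify():
    #         repo.hook('mycustomhook', node=hex(newnode))  # hypothetical hook
    #     repo._afterlock(notify)  # runs at lock release, or at once if unlocked
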
    def lock(self, wait=True):
        '''Lock the repository store (.hg/store) and return a weak reference
        to the lock. Use this before modifying the store (e.g. committing or
        stripping). If you are opening a transaction, get a lock as well.'''
        l = self._lockref and self._lockref()
        if l is not None and l.held:
            l.lock()
            return l

        def unlock():
            for k, ce in self._filecache.items():
                if k == 'dirstate' or k not in self.__dict__:
                    continue
                ce.refresh()

        l = self._lock(self.svfs, "lock", wait, unlock,
                       self.invalidate, _('repository %s') % self.origroot)
        self._lockref = weakref.ref(l)
        return l

    def wlock(self, wait=True):
        '''Lock the non-store parts of the repository (everything under
        .hg except .hg/store) and return a weak reference to the lock.
        Use this before modifying files in .hg.'''
        l = self._wlockref and self._wlockref()
        if l is not None and l.held:
            l.lock()
            return l

        def unlock():
            self.dirstate.write()
            self._filecache['dirstate'].refresh()

        l = self._lock(self.vfs, "wlock", wait, unlock,
                       self.invalidatedirstate, _('working directory of %s') %
                       self.origroot)
        self._wlockref = weakref.ref(l)
        return l

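    # Example (illustrative, not from the original file): when both locks are
    # needed, the conventional acquisition order is wlock before lock,
    # mirroring rollback() above.
    #
    #     wlock = lock = None
    #     try:
    #         wlock = repo.wlock()
    #         lock = repo.lock()
    #         ...                 # mutate working copy and store
    #     finally:
    #         release(lock, wlock)
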
    def _filecommit(self, fctx, manifest1, manifest2, linkrev, tr, changelist):
        """
        commit an individual file as part of a larger transaction
        """

        fname = fctx.path()
        text = fctx.data()
        flog = self.file(fname)
        fparent1 = manifest1.get(fname, nullid)
        fparent2 = fparent2o = manifest2.get(fname, nullid)

        meta = {}
        copy = fctx.renamed()
        if copy and copy[0] != fname:
            # Mark the new revision of this file as a copy of another
            # file.  This copy data will effectively act as a parent
            # of this new revision.  If this is a merge, the first
            # parent will be the nullid (meaning "look up the copy data")
            # and the second one will be the other parent.  For example:
            #
            # 0 --- 1 --- 3   rev1 changes file foo
            #   \       /     rev2 renames foo to bar and changes it
            #    \- 2 -/      rev3 should have bar with all changes and
            #                 should record that bar descends from
            #                 bar in rev2 and foo in rev1
            #
            # this allows this merge to succeed:
            #
            # 0 --- 1 --- 3   rev4 reverts the content change from rev2
            #  \       /      merging rev3 and rev4 should use bar@rev2
            #   \- 2 --- 4    as the merge base
            #

            cfname = copy[0]
            crev = manifest1.get(cfname)
            newfparent = fparent2

            if manifest2: # branch merge
                if fparent2 == nullid or crev is None: # copied on remote side
                    if cfname in manifest2:
                        crev = manifest2[cfname]
                        newfparent = fparent1

            # find source in nearest ancestor if we've lost track
            if not crev:
                self.ui.debug(" %s: searching for copy revision for %s\n" %
                              (fname, cfname))
                for ancestor in self[None].ancestors():
                    if cfname in ancestor:
                        crev = ancestor[cfname].filenode()
                        break

            if crev:
                self.ui.debug(" %s: copy %s:%s\n" % (fname, cfname, hex(crev)))
                meta["copy"] = cfname
                meta["copyrev"] = hex(crev)
                fparent1, fparent2 = nullid, newfparent
            else:
                self.ui.warn(_("warning: can't find ancestor for '%s' "
                               "copied from '%s'!\n") % (fname, cfname))

        elif fparent1 == nullid:
            fparent1, fparent2 = fparent2, nullid
        elif fparent2 != nullid:
            # is one parent an ancestor of the other?
            fparentancestors = flog.commonancestorsheads(fparent1, fparent2)
            if fparent1 in fparentancestors:
                fparent1, fparent2 = fparent2, nullid
            elif fparent2 in fparentancestors:
                fparent2 = nullid

        # is the file changed?
        if fparent2 != nullid or flog.cmp(fparent1, text) or meta:
            changelist.append(fname)
            return flog.add(text, meta, tr, linkrev, fparent1, fparent2)

        # are just the flags changed during merge?
        if fparent1 != fparent2o and manifest1.flags(fname) != fctx.flags():
            changelist.append(fname)

        return fparent1

    @unfilteredmethod
    def commit(self, text="", user=None, date=None, match=None, force=False,
               editor=False, extra={}):
        """Add a new revision to current repository.

        Revision information is gathered from the working directory;
        match can be used to filter the committed files. If editor is
        supplied, it is called to get a commit message.
        """

        def fail(f, msg):
            raise util.Abort('%s: %s' % (f, msg))

        if not match:
            match = matchmod.always(self.root, '')

        if not force:
            vdirs = []
            match.explicitdir = vdirs.append
            match.bad = fail

        wlock = self.wlock()
        try:
            wctx = self[None]
            merge = len(wctx.parents()) > 1

            if (not force and merge and match and
                (match.files() or match.anypats())):
                raise util.Abort(_('cannot partially commit a merge '
                                   '(do not specify files or patterns)'))

            changes = self.status(match=match, clean=force)
            if force:
                changes[0].extend(changes[6]) # mq may commit unchanged files

            # check subrepos
            subs = []
            commitsubs = set()
            newstate = wctx.substate.copy()
            # only manage subrepos and .hgsubstate if .hgsub is present
            if '.hgsub' in wctx:
                # we'll decide whether to track this ourselves, thanks
                for c in changes[:3]:
                    if '.hgsubstate' in c:
                        c.remove('.hgsubstate')

                # compare current state to last committed state
                # build new substate based on last committed state
                oldstate = wctx.p1().substate
                for s in sorted(newstate.keys()):
                    if not match(s):
                        # ignore working copy, use old state if present
                        if s in oldstate:
                            newstate[s] = oldstate[s]
                            continue
                        if not force:
                            raise util.Abort(
                                _("commit with new subrepo %s excluded") % s)
                    if wctx.sub(s).dirty(True):
                        if not self.ui.configbool('ui', 'commitsubrepos'):
                            raise util.Abort(
                                _("uncommitted changes in subrepo %s") % s,
                                hint=_("use --subrepos for recursive commit"))
                        subs.append(s)
                        commitsubs.add(s)
                    else:
                        bs = wctx.sub(s).basestate()
                        newstate[s] = (newstate[s][0], bs, newstate[s][2])
                        if oldstate.get(s, (None, None, None))[1] != bs:
                            subs.append(s)

                # check for removed subrepos
                for p in wctx.parents():
                    r = [s for s in p.substate if s not in newstate]
                    subs += [s for s in r if match(s)]
                if subs:
                    if (not match('.hgsub') and
                        '.hgsub' in (wctx.modified() + wctx.added())):
                        raise util.Abort(
                            _("can't commit subrepos without .hgsub"))
                    changes[0].insert(0, '.hgsubstate')

            elif '.hgsub' in changes[2]:
                # clean up .hgsubstate when .hgsub is removed
                if ('.hgsubstate' in wctx and
                    '.hgsubstate' not in changes[0] + changes[1] + changes[2]):
                    changes[2].insert(0, '.hgsubstate')

            # make sure all explicit patterns are matched
            if not force and match.files():
                matched = set(changes[0] + changes[1] + changes[2])

                for f in match.files():
                    f = self.dirstate.normalize(f)
                    if f == '.' or f in matched or f in wctx.substate:
                        continue
                    if f in changes[3]: # missing
                        fail(f, _('file not found!'))
                    if f in vdirs: # visited directory
                        d = f + '/'
                        for mf in matched:
                            if mf.startswith(d):
                                break
                        else:
                            fail(f, _("no match under directory!"))
                    elif f not in self.dirstate:
                        fail(f, _("file not tracked!"))

            cctx = context.workingctx(self, text, user, date, extra, changes)

            if (not force and not extra.get("close") and not merge
                and not cctx.files()
                and wctx.branch() == wctx.p1().branch()):
                return None

            if merge and cctx.deleted():
                raise util.Abort(_("cannot commit merge with missing files"))

            ms = mergemod.mergestate(self)
            for f in changes[0]:
                if f in ms and ms[f] == 'u':
                    raise util.Abort(_("unresolved merge conflicts "
                                       "(see hg help resolve)"))

            if editor:
                cctx._text = editor(self, cctx, subs)
            edited = (text != cctx._text)

            # Save commit message in case this transaction gets rolled back
            # (e.g. by a pretxncommit hook).  Leave the content alone on
            # the assumption that the user will use the same editor again.
            msgfn = self.savecommitmessage(cctx._text)

            # commit subs and write new state
            if subs:
                for s in sorted(commitsubs):
                    sub = wctx.sub(s)
                    self.ui.status(_('committing subrepository %s\n') %
                                   subrepo.subrelpath(sub))
                    sr = sub.commit(cctx._text, user, date)
                    newstate[s] = (newstate[s][0], sr)
                subrepo.writestate(self, newstate)

            p1, p2 = self.dirstate.parents()
            hookp1, hookp2 = hex(p1), (p2 != nullid and hex(p2) or '')
            try:
                self.hook("precommit", throw=True, parent1=hookp1,
                          parent2=hookp2)
                ret = self.commitctx(cctx, True)
            except: # re-raises
                if edited:
                    self.ui.write(
                        _('note: commit message saved in %s\n') % msgfn)
                raise

            # update bookmarks, dirstate and mergestate
            bookmarks.update(self, [p1, p2], ret)
            cctx.markcommitted(ret)
            ms.reset()
        finally:
            wlock.release()

        def commithook(node=hex(ret), parent1=hookp1, parent2=hookp2):
            self.hook("commit", node=node, parent1=parent1, parent2=parent2)
        self._afterlock(commithook)
        return ret

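    # Example (illustrative sketch, not part of the original source): a
    # minimal commit through this API from an embedding script.
    #
    #     from mercurial import hg, ui as uimod
    #     repo = hg.repository(uimod.ui(), '.')
    #     node = repo.commit(text='fix typo',
    #                        user='Alice <alice@example.com>')
    #     if node is None:
    #         print 'nothing changed'
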
1369 @unfilteredmethod
1363 @unfilteredmethod
1370 def commitctx(self, ctx, error=False):
1364 def commitctx(self, ctx, error=False):
1371 """Add a new revision to current repository.
1365 """Add a new revision to current repository.
1372 Revision information is passed via the context argument.
1366 Revision information is passed via the context argument.
1373 """
1367 """
1374
1368
1375 tr = lock = None
1369 tr = lock = None
1376 removed = list(ctx.removed())
1370 removed = list(ctx.removed())
1377 p1, p2 = ctx.p1(), ctx.p2()
1371 p1, p2 = ctx.p1(), ctx.p2()
1378 user = ctx.user()
1372 user = ctx.user()
1379
1373
1380 lock = self.lock()
1374 lock = self.lock()
1381 try:
1375 try:
1382 tr = self.transaction("commit")
1376 tr = self.transaction("commit")
1383 trp = weakref.proxy(tr)
1377 trp = weakref.proxy(tr)
1384
1378
1385 if ctx.files():
1379 if ctx.files():
1386 m1 = p1.manifest().copy()
1380 m1 = p1.manifest().copy()
1387 m2 = p2.manifest()
1381 m2 = p2.manifest()
1388
1382
1389 # check in files
1383 # check in files
1390 new = {}
1384 new = {}
1391 changed = []
1385 changed = []
1392 linkrev = len(self)
1386 linkrev = len(self)
1393 for f in sorted(ctx.modified() + ctx.added()):
1387 for f in sorted(ctx.modified() + ctx.added()):
1394 self.ui.note(f + "\n")
1388 self.ui.note(f + "\n")
1395 try:
1389 try:
1396 fctx = ctx[f]
1390 fctx = ctx[f]
1397 if fctx is None:
1391 if fctx is None:
1398 removed.append(f)
1392 removed.append(f)
1399 else:
1393 else:
1400 new[f] = self._filecommit(fctx, m1, m2, linkrev,
1394 new[f] = self._filecommit(fctx, m1, m2, linkrev,
1401 trp, changed)
1395 trp, changed)
1402 m1.set(f, fctx.flags())
1396 m1.set(f, fctx.flags())
1403 except OSError, inst:
1397 except OSError, inst:
1404 self.ui.warn(_("trouble committing %s!\n") % f)
1398 self.ui.warn(_("trouble committing %s!\n") % f)
1405 raise
1399 raise
1406 except IOError, inst:
1400 except IOError, inst:
1407 errcode = getattr(inst, 'errno', errno.ENOENT)
1401 errcode = getattr(inst, 'errno', errno.ENOENT)
1408 if error or errcode and errcode != errno.ENOENT:
1402 if error or errcode and errcode != errno.ENOENT:
1409 self.ui.warn(_("trouble committing %s!\n") % f)
1403 self.ui.warn(_("trouble committing %s!\n") % f)
1410 raise
1404 raise
1411
1405
1412 # update manifest
1406 # update manifest
1413 m1.update(new)
1407 m1.update(new)
1414 removed = [f for f in sorted(removed) if f in m1 or f in m2]
1408 removed = [f for f in sorted(removed) if f in m1 or f in m2]
1415 drop = [f for f in removed if f in m1]
1409 drop = [f for f in removed if f in m1]
1416 for f in drop:
1410 for f in drop:
1417 del m1[f]
1411 del m1[f]
1418 mn = self.manifest.add(m1, trp, linkrev, p1.manifestnode(),
1412 mn = self.manifest.add(m1, trp, linkrev, p1.manifestnode(),
1419 p2.manifestnode(), (new, drop))
1413 p2.manifestnode(), (new, drop))
1420 files = changed + removed
1414 files = changed + removed
1421 else:
1415 else:
1422 mn = p1.manifestnode()
1416 mn = p1.manifestnode()
1423 files = []
1417 files = []
1424
1418
1425 # update changelog
1419 # update changelog
1426 self.changelog.delayupdate()
1420 self.changelog.delayupdate()
1427 n = self.changelog.add(mn, files, ctx.description(),
1421 n = self.changelog.add(mn, files, ctx.description(),
1428 trp, p1.node(), p2.node(),
1422 trp, p1.node(), p2.node(),
1429 user, ctx.date(), ctx.extra().copy())
1423 user, ctx.date(), ctx.extra().copy())
1430 p = lambda: self.changelog.writepending() and self.root or ""
1424 p = lambda: self.changelog.writepending() and self.root or ""
1431 xp1, xp2 = p1.hex(), p2 and p2.hex() or ''
1425 xp1, xp2 = p1.hex(), p2 and p2.hex() or ''
1432 self.hook('pretxncommit', throw=True, node=hex(n), parent1=xp1,
1426 self.hook('pretxncommit', throw=True, node=hex(n), parent1=xp1,
1433 parent2=xp2, pending=p)
1427 parent2=xp2, pending=p)
1434 self.changelog.finalize(trp)
1428 self.changelog.finalize(trp)
1435 # set the new commit is proper phase
1429 # set the new commit is proper phase
1436 targetphase = subrepo.newcommitphase(self.ui, ctx)
1430 targetphase = subrepo.newcommitphase(self.ui, ctx)
1437 if targetphase:
1431 if targetphase:
1438 # retract boundary do not alter parent changeset.
1432 # retract boundary do not alter parent changeset.
1439 # if a parent have higher the resulting phase will
1433 # if a parent have higher the resulting phase will
1440 # be compliant anyway
1434 # be compliant anyway
1441 #
1435 #
1442 # if minimal phase was 0 we don't need to retract anything
1436 # if minimal phase was 0 we don't need to retract anything
1443 phases.retractboundary(self, tr, targetphase, [n])
1437 phases.retractboundary(self, tr, targetphase, [n])
1444 tr.close()
1438 tr.close()
1445 branchmap.updatecache(self.filtered('served'))
1439 branchmap.updatecache(self.filtered('served'))
1446 return n
1440 return n
1447 finally:
1441 finally:
1448 if tr:
1442 if tr:
1449 tr.release()
1443 tr.release()
1450 lock.release()
1444 lock.release()
1451
1445
1452 @unfilteredmethod
1446 @unfilteredmethod
1453 def destroying(self):
1447 def destroying(self):
1454 '''Inform the repository that nodes are about to be destroyed.
1448 '''Inform the repository that nodes are about to be destroyed.
1455 Intended for use by strip and rollback, so there's a common
1449 Intended for use by strip and rollback, so there's a common
1456 place for anything that has to be done before destroying history.
1450 place for anything that has to be done before destroying history.
1457
1451
1458 This is mostly useful for saving state that is in memory and waiting
1452 This is mostly useful for saving state that is in memory and waiting
1459 to be flushed when the current lock is released. Because a call to
1453 to be flushed when the current lock is released. Because a call to
1460 destroyed is imminent, the repo will be invalidated, causing those
1454 destroyed is imminent, the repo will be invalidated, causing those
1461 changes either to stay in memory (waiting for the next unlock) or to
1455 changes either to stay in memory (waiting for the next unlock) or to
1462 vanish completely.
1456 vanish completely.
1463 '''
1457 '''
1464 # When using the same lock to commit and strip, the phasecache is left
1458 # When using the same lock to commit and strip, the phasecache is left
1465 # dirty after committing. Then when we strip, the repo is invalidated,
1459 # dirty after committing. Then when we strip, the repo is invalidated,
1466 # causing those changes to disappear.
1460 # causing those changes to disappear.
1467 if '_phasecache' in vars(self):
1461 if '_phasecache' in vars(self):
1468 self._phasecache.write()
1462 self._phasecache.write()
1469
1463
1470 @unfilteredmethod
1464 @unfilteredmethod
1471 def destroyed(self):
1465 def destroyed(self):
1472 '''Inform the repository that nodes have been destroyed.
1466 '''Inform the repository that nodes have been destroyed.
1473 Intended for use by strip and rollback, so there's a common
1467 Intended for use by strip and rollback, so there's a common
1474 place for anything that has to be done after destroying history.
1468 place for anything that has to be done after destroying history.
1475 '''
1469 '''
1476 # When one tries to:
1470 # When one tries to:
1477 # 1) destroy nodes thus calling this method (e.g. strip)
1471 # 1) destroy nodes thus calling this method (e.g. strip)
1478 # 2) use phasecache somewhere (e.g. commit)
1472 # 2) use phasecache somewhere (e.g. commit)
1479 #
1473 #
1480 # then 2) will fail because the phasecache contains nodes that were
1474 # then 2) will fail because the phasecache contains nodes that were
1481 # removed. We can either remove phasecache from the filecache,
1475 # removed. We can either remove phasecache from the filecache,
1482 # causing it to reload next time it is accessed, or simply filter
1476 # causing it to reload next time it is accessed, or simply filter
1483 # the removed nodes now and write the updated cache.
1477 # the removed nodes now and write the updated cache.
1484 self._phasecache.filterunknown(self)
1478 self._phasecache.filterunknown(self)
1485 self._phasecache.write()
1479 self._phasecache.write()
1486
1480
1487 # update the 'served' branch cache to help read-only server processes
1481 # update the 'served' branch cache to help read-only server processes
1488 # Thanks to branchcache collaboration this is done from the nearest
1482 # Thanks to branchcache collaboration this is done from the nearest
1489 # filtered subset and it is expected to be fast.
1483 # filtered subset and it is expected to be fast.
1490 branchmap.updatecache(self.filtered('served'))
1484 branchmap.updatecache(self.filtered('served'))
1491
1485
1492 # Ensure the persistent tag cache is updated. Doing it now
1486 # Ensure the persistent tag cache is updated. Doing it now
1493 # means that the tag cache only has to worry about destroyed
1487 # means that the tag cache only has to worry about destroyed
1494 # heads immediately after a strip/rollback. That in turn
1488 # heads immediately after a strip/rollback. That in turn
1495 # guarantees that "cachetip == currenttip" (comparing both rev
1489 # guarantees that "cachetip == currenttip" (comparing both rev
1496 # and node) always means no nodes have been added or destroyed.
1490 # and node) always means no nodes have been added or destroyed.
1497
1491
1498 # XXX this is suboptimal when qrefresh'ing: we strip the current
1492 # XXX this is suboptimal when qrefresh'ing: we strip the current
1499 # head, refresh the tag cache, then immediately add a new head.
1493 # head, refresh the tag cache, then immediately add a new head.
1500 # But I think doing it this way is necessary for the "instant
1494 # But I think doing it this way is necessary for the "instant
1501 # tag cache retrieval" case to work.
1495 # tag cache retrieval" case to work.
1502 self.invalidate()
1496 self.invalidate()
1503
1497
1504 def walk(self, match, node=None):
1498 def walk(self, match, node=None):
1505 '''
1499 '''
1506 walk recursively through the directory tree or a given
1500 walk recursively through the directory tree or a given
1507 changeset, finding all files matched by the match
1501 changeset, finding all files matched by the match
1508 function
1502 function
1509 '''
1503 '''
1510 return self[node].walk(match)
1504 return self[node].walk(match)
1511
1505
1512 def status(self, node1='.', node2=None, match=None,
1506 def status(self, node1='.', node2=None, match=None,
1513 ignored=False, clean=False, unknown=False,
1507 ignored=False, clean=False, unknown=False,
1514 listsubrepos=False):
1508 listsubrepos=False):
1515 '''a convenience method that calls node1.status(node2)'''
1509 '''a convenience method that calls node1.status(node2)'''
1516 return self[node1].status(node2, match, ignored, clean, unknown,
1510 return self[node1].status(node2, match, ignored, clean, unknown,
1517 listsubrepos)
1511 listsubrepos)
1518
1512
1519 def heads(self, start=None):
1513 def heads(self, start=None):
1520 heads = self.changelog.heads(start)
1514 heads = self.changelog.heads(start)
1521 # sort the output in rev descending order
1515 # sort the output in rev descending order
1522 return sorted(heads, key=self.changelog.rev, reverse=True)
1516 return sorted(heads, key=self.changelog.rev, reverse=True)
1523
1517
1524 def branchheads(self, branch=None, start=None, closed=False):
1518 def branchheads(self, branch=None, start=None, closed=False):
1525 '''return a (possibly filtered) list of heads for the given branch
1519 '''return a (possibly filtered) list of heads for the given branch
1526
1520
1527 Heads are returned in topological order, from newest to oldest.
1521 Heads are returned in topological order, from newest to oldest.
1528 If branch is None, use the dirstate branch.
1522 If branch is None, use the dirstate branch.
1529 If start is not None, return only heads reachable from start.
1523 If start is not None, return only heads reachable from start.
1530 If closed is True, return heads that are marked as closed as well.
1524 If closed is True, return heads that are marked as closed as well.
1531 '''
1525 '''
1532 if branch is None:
1526 if branch is None:
1533 branch = self[None].branch()
1527 branch = self[None].branch()
1534 branches = self.branchmap()
1528 branches = self.branchmap()
1535 if branch not in branches:
1529 if branch not in branches:
1536 return []
1530 return []
1537 # the cache returns heads ordered lowest to highest
1531 # the cache returns heads ordered lowest to highest
1538 bheads = list(reversed(branches.branchheads(branch, closed=closed)))
1532 bheads = list(reversed(branches.branchheads(branch, closed=closed)))
1539 if start is not None:
1533 if start is not None:
1540 # filter out the heads that cannot be reached from startrev
1534 # filter out the heads that cannot be reached from startrev
1541 fbheads = set(self.changelog.nodesbetween([start], bheads)[2])
1535 fbheads = set(self.changelog.nodesbetween([start], bheads)[2])
1542 bheads = [h for h in bheads if h in fbheads]
1536 bheads = [h for h in bheads if h in fbheads]
1543 return bheads
1537 return bheads
1544
1538
1545 def branches(self, nodes):
1539 def branches(self, nodes):
1546 if not nodes:
1540 if not nodes:
1547 nodes = [self.changelog.tip()]
1541 nodes = [self.changelog.tip()]
1548 b = []
1542 b = []
1549 for n in nodes:
1543 for n in nodes:
1550 t = n
1544 t = n
1551 while True:
1545 while True:
1552 p = self.changelog.parents(n)
1546 p = self.changelog.parents(n)
1553 if p[1] != nullid or p[0] == nullid:
1547 if p[1] != nullid or p[0] == nullid:
1554 b.append((t, n, p[0], p[1]))
1548 b.append((t, n, p[0], p[1]))
1555 break
1549 break
1556 n = p[0]
1550 n = p[0]
1557 return b
1551 return b
1558
1552
1559 def between(self, pairs):
1553 def between(self, pairs):
1560 r = []
1554 r = []
1561
1555
1562 for top, bottom in pairs:
1556 for top, bottom in pairs:
1563 n, l, i = top, [], 0
1557 n, l, i = top, [], 0
1564 f = 1
1558 f = 1
1565
1559
1566 while n != bottom and n != nullid:
1560 while n != bottom and n != nullid:
1567 p = self.changelog.parents(n)[0]
1561 p = self.changelog.parents(n)[0]
1568 if i == f:
1562 if i == f:
1569 l.append(n)
1563 l.append(n)
1570 f = f * 2
1564 f = f * 2
1571 n = p
1565 n = p
1572 i += 1
1566 i += 1
1573
1567
1574 r.append(l)
1568 r.append(l)
1575
1569
1576 return r
1570 return r
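
# Illustrative sketch (not part of the original file): between() above samples
# the first-parent chain at exponentially growing distances (1, 2, 4, 8, ...)
# from `top`, which keeps discovery payloads small on long histories. The same
# idea on a toy history where parent(n) = n - 1:
def _samplebetween(top, bottom):
    n, sampled, i, f = top, [], 0, 1
    while n != bottom and n > 0:
        if i == f:                  # keep nodes at distances 1, 2, 4, 8, ...
            sampled.append(n)
            f *= 2
        n -= 1                      # step to the (only) parent
        i += 1
    return sampled

# _samplebetween(100, 0) -> [99, 98, 96, 92, 84, 68, 36]
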
1577
1571
1578 def pull(self, remote, heads=None, force=False):
1572 def pull(self, remote, heads=None, force=False):
1579 return exchange.pull(self, remote, heads, force)
1573 return exchange.pull(self, remote, heads, force)
1580
1574
1581 def checkpush(self, pushop):
1575 def checkpush(self, pushop):
1582 """Extensions can override this function if additional checks have
1576 """Extensions can override this function if additional checks have
1583 to be performed before pushing, or call it if they override push
1577 to be performed before pushing, or call it if they override push
1584 command.
1578 command.
1585 """
1579 """
1586 pass
1580 pass
1587
1581
1588 @unfilteredpropertycache
1582 @unfilteredpropertycache
1589 def prepushoutgoinghooks(self):
1583 def prepushoutgoinghooks(self):
1590 """Return util.hooks consists of "(repo, remote, outgoing)"
1584 """Return util.hooks consists of "(repo, remote, outgoing)"
1591 functions, which are called before pushing changesets.
1585 functions, which are called before pushing changesets.
1592 """
1586 """
1593 return util.hooks()
1587 return util.hooks()
1594
1588
1595 def push(self, remote, force=False, revs=None, newbranch=False):
1589 def push(self, remote, force=False, revs=None, newbranch=False):
1596 return exchange.push(self, remote, force, revs, newbranch)
1590 return exchange.push(self, remote, force, revs, newbranch)
1597
1591
1598 def stream_in(self, remote, requirements):
1592 def stream_in(self, remote, requirements):
1599 lock = self.lock()
1593 lock = self.lock()
1600 try:
1594 try:
1601 # Save remote branchmap. We will use it later
1595 # Save remote branchmap. We will use it later
1602 # to speed up branchcache creation
1596 # to speed up branchcache creation
1603 rbranchmap = None
1597 rbranchmap = None
1604 if remote.capable("branchmap"):
1598 if remote.capable("branchmap"):
1605 rbranchmap = remote.branchmap()
1599 rbranchmap = remote.branchmap()
1606
1600
1607 fp = remote.stream_out()
1601 fp = remote.stream_out()
1608 l = fp.readline()
1602 l = fp.readline()
1609 try:
1603 try:
1610 resp = int(l)
1604 resp = int(l)
1611 except ValueError:
1605 except ValueError:
1612 raise error.ResponseError(
1606 raise error.ResponseError(
1613 _('unexpected response from remote server:'), l)
1607 _('unexpected response from remote server:'), l)
1614 if resp == 1:
1608 if resp == 1:
1615 raise util.Abort(_('operation forbidden by server'))
1609 raise util.Abort(_('operation forbidden by server'))
1616 elif resp == 2:
1610 elif resp == 2:
1617 raise util.Abort(_('locking the remote repository failed'))
1611 raise util.Abort(_('locking the remote repository failed'))
1618 elif resp != 0:
1612 elif resp != 0:
1619 raise util.Abort(_('the server sent an unknown error code'))
1613 raise util.Abort(_('the server sent an unknown error code'))
1620 self.ui.status(_('streaming all changes\n'))
1614 self.ui.status(_('streaming all changes\n'))
1621 l = fp.readline()
1615 l = fp.readline()
1622 try:
1616 try:
1623 total_files, total_bytes = map(int, l.split(' ', 1))
1617 total_files, total_bytes = map(int, l.split(' ', 1))
1624 except (ValueError, TypeError):
1618 except (ValueError, TypeError):
1625 raise error.ResponseError(
1619 raise error.ResponseError(
1626 _('unexpected response from remote server:'), l)
1620 _('unexpected response from remote server:'), l)
1627 self.ui.status(_('%d files to transfer, %s of data\n') %
1621 self.ui.status(_('%d files to transfer, %s of data\n') %
1628 (total_files, util.bytecount(total_bytes)))
1622 (total_files, util.bytecount(total_bytes)))
1629 handled_bytes = 0
1623 handled_bytes = 0
1630 self.ui.progress(_('clone'), 0, total=total_bytes)
1624 self.ui.progress(_('clone'), 0, total=total_bytes)
1631 start = time.time()
1625 start = time.time()
1632
1626
1633 tr = self.transaction(_('clone'))
1627 tr = self.transaction(_('clone'))
1634 try:
1628 try:
1635 for i in xrange(total_files):
1629 for i in xrange(total_files):
1636 # XXX doesn't support '\n' or '\r' in filenames
1630 # XXX doesn't support '\n' or '\r' in filenames
1637 l = fp.readline()
1631 l = fp.readline()
1638 try:
1632 try:
1639 name, size = l.split('\0', 1)
1633 name, size = l.split('\0', 1)
1640 size = int(size)
1634 size = int(size)
1641 except (ValueError, TypeError):
1635 except (ValueError, TypeError):
1642 raise error.ResponseError(
1636 raise error.ResponseError(
1643 _('unexpected response from remote server:'), l)
1637 _('unexpected response from remote server:'), l)
1644 if self.ui.debugflag:
1638 if self.ui.debugflag:
1645 self.ui.debug('adding %s (%s)\n' %
1639 self.ui.debug('adding %s (%s)\n' %
1646 (name, util.bytecount(size)))
1640 (name, util.bytecount(size)))
1647 # for backwards compat, name was partially encoded
1641 # for backwards compat, name was partially encoded
1648 ofp = self.sopener(store.decodedir(name), 'w')
1642 ofp = self.sopener(store.decodedir(name), 'w')
1649 for chunk in util.filechunkiter(fp, limit=size):
1643 for chunk in util.filechunkiter(fp, limit=size):
1650 handled_bytes += len(chunk)
1644 handled_bytes += len(chunk)
1651 self.ui.progress(_('clone'), handled_bytes,
1645 self.ui.progress(_('clone'), handled_bytes,
1652 total=total_bytes)
1646 total=total_bytes)
1653 ofp.write(chunk)
1647 ofp.write(chunk)
1654 ofp.close()
1648 ofp.close()
1655 tr.close()
1649 tr.close()
1656 finally:
1650 finally:
1657 tr.release()
1651 tr.release()
1658
1652
1659 # Writing straight to files circumvented the in-memory caches
1653 # Writing straight to files circumvented the in-memory caches
1660 self.invalidate()
1654 self.invalidate()
1661
1655
1662 elapsed = time.time() - start
1656 elapsed = time.time() - start
1663 if elapsed <= 0:
1657 if elapsed <= 0:
1664 elapsed = 0.001
1658 elapsed = 0.001
1665 self.ui.progress(_('clone'), None)
1659 self.ui.progress(_('clone'), None)
1666 self.ui.status(_('transferred %s in %.1f seconds (%s/sec)\n') %
1660 self.ui.status(_('transferred %s in %.1f seconds (%s/sec)\n') %
1667 (util.bytecount(total_bytes), elapsed,
1661 (util.bytecount(total_bytes), elapsed,
1668 util.bytecount(total_bytes / elapsed)))
1662 util.bytecount(total_bytes / elapsed)))
1669
1663
1670 # new requirements = old non-format requirements +
1664 # new requirements = old non-format requirements +
1671 # new format-related
1665 # new format-related
1672 # requirements from the streamed-in repository
1666 # requirements from the streamed-in repository
1673 requirements.update(set(self.requirements) - self.supportedformats)
1667 requirements.update(set(self.requirements) - self.supportedformats)
1674 self._applyrequirements(requirements)
1668 self._applyrequirements(requirements)
1675 self._writerequirements()
1669 self._writerequirements()
1676
1670
1677 if rbranchmap:
1671 if rbranchmap:
1678 rbheads = []
1672 rbheads = []
1679 for bheads in rbranchmap.itervalues():
1673 for bheads in rbranchmap.itervalues():
1680 rbheads.extend(bheads)
1674 rbheads.extend(bheads)
1681
1675
1682 if rbheads:
1676 if rbheads:
1683 rtiprev = max((int(self.changelog.rev(node))
1677 rtiprev = max((int(self.changelog.rev(node))
1684 for node in rbheads))
1678 for node in rbheads))
1685 cache = branchmap.branchcache(rbranchmap,
1679 cache = branchmap.branchcache(rbranchmap,
1686 self[rtiprev].node(),
1680 self[rtiprev].node(),
1687 rtiprev)
1681 rtiprev)
1688 # Try to stick it as low as possible
1682 # Try to stick it as low as possible
1689 # filters above 'served' are unlikely to be fetched from a clone
1683 # filters above 'served' are unlikely to be fetched from a clone
1690 for candidate in ('base', 'immutable', 'served'):
1684 for candidate in ('base', 'immutable', 'served'):
1691 rview = self.filtered(candidate)
1685 rview = self.filtered(candidate)
1692 if cache.validfor(rview):
1686 if cache.validfor(rview):
1693 self._branchcaches[candidate] = cache
1687 self._branchcaches[candidate] = cache
1694 cache.write(rview)
1688 cache.write(rview)
1695 break
1689 break
1696 self.invalidate()
1690 self.invalidate()
1697 return len(self.heads()) + 1
1691 return len(self.heads()) + 1
1698 finally:
1692 finally:
1699 lock.release()
1693 lock.release()
1700
1694
1701 def clone(self, remote, heads=[], stream=False):
1695 def clone(self, remote, heads=[], stream=False):
1702 '''clone remote repository.
1696 '''clone remote repository.
1703
1697
1704 keyword arguments:
1698 keyword arguments:
1705 heads: list of revs to clone (forces use of pull)
1699 heads: list of revs to clone (forces use of pull)
1706 stream: use streaming clone if possible'''
1700 stream: use streaming clone if possible'''
1707
1701
1708 # now, all clients that can request uncompressed clones can
1702 # now, all clients that can request uncompressed clones can
1709 # read repo formats supported by all servers that can serve
1703 # read repo formats supported by all servers that can serve
1710 # them.
1704 # them.
1711
1705
1712 # if revlog format changes, client will have to check version
1706 # if revlog format changes, client will have to check version
1713 # and format flags on "stream" capability, and use
1707 # and format flags on "stream" capability, and use
1714 # uncompressed only if compatible.
1708 # uncompressed only if compatible.
1715
1709
1716 if not stream:
1710 if not stream:
1717 # if the server explicitly prefers to stream (for fast LANs)
1711 # if the server explicitly prefers to stream (for fast LANs)
1718 stream = remote.capable('stream-preferred')
1712 stream = remote.capable('stream-preferred')
1719
1713
1720 if stream and not heads:
1714 if stream and not heads:
1721 # 'stream' means remote revlog format is revlogv1 only
1715 # 'stream' means remote revlog format is revlogv1 only
1722 if remote.capable('stream'):
1716 if remote.capable('stream'):
1723 return self.stream_in(remote, set(('revlogv1',)))
1717 return self.stream_in(remote, set(('revlogv1',)))
1724 # otherwise, 'streamreqs' contains the remote revlog format
1718 # otherwise, 'streamreqs' contains the remote revlog format
1725 streamreqs = remote.capable('streamreqs')
1719 streamreqs = remote.capable('streamreqs')
1726 if streamreqs:
1720 if streamreqs:
1727 streamreqs = set(streamreqs.split(','))
1721 streamreqs = set(streamreqs.split(','))
1728 # if we support it, stream in and adjust our requirements
1722 # if we support it, stream in and adjust our requirements
1729 if not streamreqs - self.supportedformats:
1723 if not streamreqs - self.supportedformats:
1730 return self.stream_in(remote, streamreqs)
1724 return self.stream_in(remote, streamreqs)
1731 return self.pull(remote, heads)
1725 return self.pull(remote, heads)
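
# Negotiation sketch (hypothetical values): how the streamreqs check above
# decides between a streaming clone and a regular pull. A client supporting
# {'revlogv1', 'generaldelta'} may stream from a 'streamreqs=revlogv1' server,
# but must fall back to pull if the server requires an unknown format.
def _canstream(streamreqs, supportedformats):
    return not set(streamreqs.split(',')) - supportedformats
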
1732
1726
1733 def pushkey(self, namespace, key, old, new):
1727 def pushkey(self, namespace, key, old, new):
1734 self.hook('prepushkey', throw=True, namespace=namespace, key=key,
1728 self.hook('prepushkey', throw=True, namespace=namespace, key=key,
1735 old=old, new=new)
1729 old=old, new=new)
1736 self.ui.debug('pushing key for "%s:%s"\n' % (namespace, key))
1730 self.ui.debug('pushing key for "%s:%s"\n' % (namespace, key))
1737 ret = pushkey.push(self, namespace, key, old, new)
1731 ret = pushkey.push(self, namespace, key, old, new)
1738 self.hook('pushkey', namespace=namespace, key=key, old=old, new=new,
1732 self.hook('pushkey', namespace=namespace, key=key, old=old, new=new,
1739 ret=ret)
1733 ret=ret)
1740 return ret
1734 return ret
1741
1735
1742 def listkeys(self, namespace):
1736 def listkeys(self, namespace):
1743 self.hook('prelistkeys', throw=True, namespace=namespace)
1737 self.hook('prelistkeys', throw=True, namespace=namespace)
1744 self.ui.debug('listing keys for "%s"\n' % namespace)
1738 self.ui.debug('listing keys for "%s"\n' % namespace)
1745 values = pushkey.list(self, namespace)
1739 values = pushkey.list(self, namespace)
1746 self.hook('listkeys', namespace=namespace, values=values)
1740 self.hook('listkeys', namespace=namespace, values=values)
1747 return values
1741 return values
1748
1742
1749 def debugwireargs(self, one, two, three=None, four=None, five=None):
1743 def debugwireargs(self, one, two, three=None, four=None, five=None):
1750 '''used to test argument passing over the wire'''
1744 '''used to test argument passing over the wire'''
1751 return "%s %s %s %s %s" % (one, two, three, four, five)
1745 return "%s %s %s %s %s" % (one, two, three, four, five)
1752
1746
1753 def savecommitmessage(self, text):
1747 def savecommitmessage(self, text):
1754 fp = self.opener('last-message.txt', 'wb')
1748 fp = self.opener('last-message.txt', 'wb')
1755 try:
1749 try:
1756 fp.write(text)
1750 fp.write(text)
1757 finally:
1751 finally:
1758 fp.close()
1752 fp.close()
1759 return self.pathto(fp.name[len(self.root) + 1:])
1753 return self.pathto(fp.name[len(self.root) + 1:])
1760
1754
1761 # used to avoid circular references so destructors work
1755 # used to avoid circular references so destructors work
1762 def aftertrans(files):
1756 def aftertrans(files):
1763 renamefiles = [tuple(t) for t in files]
1757 renamefiles = [tuple(t) for t in files]
1764 def a():
1758 def a():
1765 for vfs, src, dest in renamefiles:
1759 for vfs, src, dest in renamefiles:
1766 try:
1760 try:
1767 vfs.rename(src, dest)
1761 vfs.rename(src, dest)
1768 except OSError: # journal file does not yet exist
1762 except OSError: # journal file does not yet exist
1769 pass
1763 pass
1770 return a
1764 return a
1771
1765
1772 def undoname(fn):
1766 def undoname(fn):
1773 base, name = os.path.split(fn)
1767 base, name = os.path.split(fn)
1774 assert name.startswith('journal')
1768 assert name.startswith('journal')
1775 return os.path.join(base, name.replace('journal', 'undo', 1))
1769 return os.path.join(base, name.replace('journal', 'undo', 1))
1776
1770
1777 def instance(ui, path, create):
1771 def instance(ui, path, create):
1778 return localrepository(ui, util.urllocalpath(path), create)
1772 return localrepository(ui, util.urllocalpath(path), create)
1779
1773
1780 def islocal(path):
1774 def islocal(path):
1781 return True
1775 return True
@@ -1,868 +1,868 b''
1 # wireproto.py - generic wire protocol support functions
1 # wireproto.py - generic wire protocol support functions
2 #
2 #
3 # Copyright 2005-2010 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005-2010 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 import urllib, tempfile, os, sys
8 import urllib, tempfile, os, sys
9 from i18n import _
9 from i18n import _
10 from node import bin, hex
10 from node import bin, hex
11 import changegroup as changegroupmod, bundle2, pushkey as pushkeymod
11 import changegroup as changegroupmod, bundle2, pushkey as pushkeymod
12 import peer, error, encoding, util, store, exchange
12 import peer, error, encoding, util, store, exchange
13
13
14
14
15 class abstractserverproto(object):
15 class abstractserverproto(object):
16 """abstract class that summarizes the protocol API
16 """abstract class that summarizes the protocol API
17
17
18 Used as reference and documentation.
18 Used as reference and documentation.
19 """
19 """
20
20
21 def getargs(self, args):
21 def getargs(self, args):
22 """return the value for arguments in <args>
22 """return the value for arguments in <args>
23
23
24 returns a list of values (same order as <args>)"""
24 returns a list of values (same order as <args>)"""
25 raise NotImplementedError()
25 raise NotImplementedError()
26
26
27 def getfile(self, fp):
27 def getfile(self, fp):
28 """write the whole content of a file into a file like object
28 """write the whole content of a file into a file like object
29
29
30 The file is in the form::
30 The file is in the form::
31
31
32 (<chunk-size>\n<chunk>)+0\n
32 (<chunk-size>\n<chunk>)+0\n
33
33
34 the chunk size is the ASCII decimal representation of the integer.
34 the chunk size is the ASCII decimal representation of the integer.
35 """
35 """
36 raise NotImplementedError()
36 raise NotImplementedError()
37
37
38 def redirect(self):
38 def redirect(self):
39 """may setup interception for stdout and stderr
39 """may setup interception for stdout and stderr
40
40
41 See also the `restore` method."""
41 See also the `restore` method."""
42 raise NotImplementedError()
42 raise NotImplementedError()
43
43
44 # If the `redirect` function does install interception, the `restore`
44 # If the `redirect` function does install interception, the `restore`
45 # function MUST be defined. If interception is not used, this function
45 # function MUST be defined. If interception is not used, this function
46 # MUST NOT be defined.
46 # MUST NOT be defined.
47 #
47 #
48 # left commented here on purpose
48 # left commented here on purpose
49 #
49 #
50 #def restore(self):
50 #def restore(self):
51 # """reinstall previous stdout and stderr and return intercepted stdout
51 # """reinstall previous stdout and stderr and return intercepted stdout
52 # """
52 # """
53 # raise NotImplementedError()
53 # raise NotImplementedError()
54
54
55 def groupchunks(self, cg):
55 def groupchunks(self, cg):
56 """return 4096 chunks from a changegroup object
56 """return 4096 chunks from a changegroup object
57
57
58 Some protocols may have compressed the contents."""
58 Some protocols may have compressed the contents."""
59 raise NotImplementedError()
59 raise NotImplementedError()
60
60
61 # abstract batching support
61 # abstract batching support
62
62
63 class future(object):
63 class future(object):
64 '''placeholder for a value to be set later'''
64 '''placeholder for a value to be set later'''
65 def set(self, value):
65 def set(self, value):
66 if util.safehasattr(self, 'value'):
66 if util.safehasattr(self, 'value'):
67 raise error.RepoError("future is already set")
67 raise error.RepoError("future is already set")
68 self.value = value
68 self.value = value
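
# Usage sketch (hypothetical, not in the original module): a future is a
# write-once holder that batched calls fill in after submit().
def _futuredemo():
    f = future()
    f.set(42)
    assert f.value == 42
    try:
        f.set(43)                   # a second set is rejected
    except error.RepoError:
        pass
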
69
69
70 class batcher(object):
70 class batcher(object):
71 '''base class for batches of commands submittable in a single request
71 '''base class for batches of commands submittable in a single request
72
72
73 All methods invoked on instances of this class are simply queued and
73 All methods invoked on instances of this class are simply queued and
74 return a future for the result. Once you call submit(), all the queued
74 return a future for the result. Once you call submit(), all the queued
75 calls are performed and the results set in their respective futures.
75 calls are performed and the results set in their respective futures.
76 '''
76 '''
77 def __init__(self):
77 def __init__(self):
78 self.calls = []
78 self.calls = []
79 def __getattr__(self, name):
79 def __getattr__(self, name):
80 def call(*args, **opts):
80 def call(*args, **opts):
81 resref = future()
81 resref = future()
82 self.calls.append((name, args, opts, resref,))
82 self.calls.append((name, args, opts, resref,))
83 return resref
83 return resref
84 return call
84 return call
85 def submit(self):
85 def submit(self):
86 pass
86 pass
87
87
88 class localbatch(batcher):
88 class localbatch(batcher):
89 '''performs the queued calls directly'''
89 '''performs the queued calls directly'''
90 def __init__(self, local):
90 def __init__(self, local):
91 batcher.__init__(self)
91 batcher.__init__(self)
92 self.local = local
92 self.local = local
93 def submit(self):
93 def submit(self):
94 for name, args, opts, resref in self.calls:
94 for name, args, opts, resref in self.calls:
95 resref.set(getattr(self.local, name)(*args, **opts))
95 resref.set(getattr(self.local, name)(*args, **opts))
96
96
97 class remotebatch(batcher):
97 class remotebatch(batcher):
98 '''batches the queued calls; uses as few roundtrips as possible'''
98 '''batches the queued calls; uses as few roundtrips as possible'''
99 def __init__(self, remote):
99 def __init__(self, remote):
100 '''remote must support _submitbatch(encbatch) and
100 '''remote must support _submitbatch(encbatch) and
101 _submitone(op, encargs)'''
101 _submitone(op, encargs)'''
102 batcher.__init__(self)
102 batcher.__init__(self)
103 self.remote = remote
103 self.remote = remote
104 def submit(self):
104 def submit(self):
105 req, rsp = [], []
105 req, rsp = [], []
106 for name, args, opts, resref in self.calls:
106 for name, args, opts, resref in self.calls:
107 mtd = getattr(self.remote, name)
107 mtd = getattr(self.remote, name)
108 batchablefn = getattr(mtd, 'batchable', None)
108 batchablefn = getattr(mtd, 'batchable', None)
109 if batchablefn is not None:
109 if batchablefn is not None:
110 batchable = batchablefn(mtd.im_self, *args, **opts)
110 batchable = batchablefn(mtd.im_self, *args, **opts)
111 encargsorres, encresref = batchable.next()
111 encargsorres, encresref = batchable.next()
112 if encresref:
112 if encresref:
113 req.append((name, encargsorres,))
113 req.append((name, encargsorres,))
114 rsp.append((batchable, encresref, resref,))
114 rsp.append((batchable, encresref, resref,))
115 else:
115 else:
116 resref.set(encargsorres)
116 resref.set(encargsorres)
117 else:
117 else:
118 if req:
118 if req:
119 self._submitreq(req, rsp)
119 self._submitreq(req, rsp)
120 req, rsp = [], []
120 req, rsp = [], []
121 resref.set(mtd(*args, **opts))
121 resref.set(mtd(*args, **opts))
122 if req:
122 if req:
123 self._submitreq(req, rsp)
123 self._submitreq(req, rsp)
124 def _submitreq(self, req, rsp):
124 def _submitreq(self, req, rsp):
125 encresults = self.remote._submitbatch(req)
125 encresults = self.remote._submitbatch(req)
126 for encres, r in zip(encresults, rsp):
126 for encres, r in zip(encresults, rsp):
127 batchable, encresref, resref = r
127 batchable, encresref, resref = r
128 encresref.set(encres)
128 encresref.set(encres)
129 resref.set(batchable.next())
129 resref.set(batchable.next())
130
130
131 def batchable(f):
131 def batchable(f):
132 '''annotation for batchable methods
132 '''annotation for batchable methods
133
133
134 Such methods must implement a coroutine as follows:
134 Such methods must implement a coroutine as follows:
135
135
136 @batchable
136 @batchable
137 def sample(self, one, two=None):
137 def sample(self, one, two=None):
138 # Handle locally computable results first:
138 # Handle locally computable results first:
139 if not one:
139 if not one:
140 yield "a local result", None
140 yield "a local result", None
141 # Build list of encoded arguments suitable for your wire protocol:
141 # Build list of encoded arguments suitable for your wire protocol:
142 encargs = [('one', encode(one),), ('two', encode(two),)]
142 encargs = [('one', encode(one),), ('two', encode(two),)]
143 # Create future for injection of encoded result:
143 # Create future for injection of encoded result:
144 encresref = future()
144 encresref = future()
145 # Return encoded arguments and future:
145 # Return encoded arguments and future:
146 yield encargs, encresref
146 yield encargs, encresref
147 # Assuming the future to be filled with the result from the batched
147 # Assuming the future to be filled with the result from the batched
148 # request now. Decode it:
148 # request now. Decode it:
149 yield decode(encresref.value)
149 yield decode(encresref.value)
150
150
151 The decorator returns a function which wraps this coroutine as a plain
151 The decorator returns a function which wraps this coroutine as a plain
152 method, but adds the original method as an attribute called "batchable",
152 method, but adds the original method as an attribute called "batchable",
153 which is used by remotebatch to split the call into separate encoding and
153 which is used by remotebatch to split the call into separate encoding and
154 decoding phases.
154 decoding phases.
155 '''
155 '''
156 def plain(*args, **opts):
156 def plain(*args, **opts):
157 batchable = f(*args, **opts)
157 batchable = f(*args, **opts)
158 encargsorres, encresref = batchable.next()
158 encargsorres, encresref = batchable.next()
159 if not encresref:
159 if not encresref:
160 return encargsorres # a local result in this case
160 return encargsorres # a local result in this case
161 self = args[0]
161 self = args[0]
162 encresref.set(self._submitone(f.func_name, encargsorres))
162 encresref.set(self._submitone(f.func_name, encargsorres))
163 return batchable.next()
163 return batchable.next()
164 setattr(plain, 'batchable', f)
164 setattr(plain, 'batchable', f)
165 return plain
165 return plain
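
# Client-side sketch (assumes `remote` is a wirepeer): several batchable
# calls are queued, then resolved together in a single round trip.
def _batchdemo(remote):
    b = remote.batch()
    fheads = b.heads()              # returns a future immediately
    fknown = b.known([])            # queued, nothing sent yet
    b.submit()                      # one wire request answers both
    return fheads.value, fknown.value
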
166
166
167 # list of nodes encoding / decoding
167 # list of nodes encoding / decoding
168
168
169 def decodelist(l, sep=' '):
169 def decodelist(l, sep=' '):
170 if l:
170 if l:
171 return map(bin, l.split(sep))
171 return map(bin, l.split(sep))
172 return []
172 return []
173
173
174 def encodelist(l, sep=' '):
174 def encodelist(l, sep=' '):
175 return sep.join(map(hex, l))
175 return sep.join(map(hex, l))
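
# Round-trip sketch (hypothetical values): nodes travel over the wire as
# space-separated 40-character hex strings and come back as 20-byte nodes.
def _nodelistdemo():
    nodes = ['\x00' * 20, '\xff' * 20]
    wire = encodelist(nodes)        # '0000...0000 ffff...ffff'
    assert decodelist(wire) == nodes
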
176
176
177 # batched call argument encoding
177 # batched call argument encoding
178
178
179 def escapearg(plain):
179 def escapearg(plain):
180 return (plain
180 return (plain
181 .replace(':', '::')
181 .replace(':', '::')
182 .replace(',', ':,')
182 .replace(',', ':,')
183 .replace(';', ':;')
183 .replace(';', ':;')
184 .replace('=', ':='))
184 .replace('=', ':='))
185
185
186 def unescapearg(escaped):
186 def unescapearg(escaped):
187 return (escaped
187 return (escaped
188 .replace(':=', '=')
188 .replace(':=', '=')
189 .replace(':;', ';')
189 .replace(':;', ';')
190 .replace(':,', ',')
190 .replace(':,', ',')
191 .replace('::', ':'))
191 .replace('::', ':'))
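
# Round-trip sketch (hypothetical value): ':' is doubled and the batch
# separators ',', ';' and '=' are protected, so values survive the batch
# wire format unambiguously.
def _escapedemo():
    raw = 'key=a,b;c:d'
    esc = escapearg(raw)            # 'key:=a:,b:;c::d'
    assert unescapearg(esc) == raw
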
192
192
193 # mapping of options accepted by getbundle and their types
193 # mapping of options accepted by getbundle and their types
194 #
194 #
195 # Meant to be extended by extensions. It is the extensions' responsibility to
195 # Meant to be extended by extensions. It is the extensions' responsibility to
196 # ensure such options are properly processed in exchange.getbundle.
196 # ensure such options are properly processed in exchange.getbundle.
197 #
197 #
198 # supported types are:
198 # supported types are:
199 #
199 #
200 # :nodes: list of binary nodes
200 # :nodes: list of binary nodes
201 # :csv: list of comma-separated values
201 # :csv: list of comma-separated values
202 # :plain: string with no transformation needed.
202 # :plain: string with no transformation needed.
203 gboptsmap = {'heads': 'nodes',
203 gboptsmap = {'heads': 'nodes',
204 'common': 'nodes',
204 'common': 'nodes',
205 'bundlecaps': 'csv',
205 'bundlecaps': 'csv',
206 'listkeys': 'csv',
206 'listkeys': 'csv',
207 'cg': 'boolean'}
207 'cg': 'boolean'}
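
# Extension sketch (hypothetical option name): an extension advertises a new
# getbundle argument by adding an entry to this map and handling the decoded
# value in exchange.getbundle.
def _registergetbundleopt():
    gboptsmap['obsmarkers'] = 'boolean'   # hypothetical new option
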
208
208
209 # client side
209 # client side
210
210
211 class wirepeer(peer.peerrepository):
211 class wirepeer(peer.peerrepository):
212
212
213 def batch(self):
213 def batch(self):
214 return remotebatch(self)
214 return remotebatch(self)
215 def _submitbatch(self, req):
215 def _submitbatch(self, req):
216 cmds = []
216 cmds = []
217 for op, argsdict in req:
217 for op, argsdict in req:
218 args = ','.join('%s=%s' % p for p in argsdict.iteritems())
218 args = ','.join('%s=%s' % p for p in argsdict.iteritems())
219 cmds.append('%s %s' % (op, args))
219 cmds.append('%s %s' % (op, args))
220 rsp = self._call("batch", cmds=';'.join(cmds))
220 rsp = self._call("batch", cmds=';'.join(cmds))
221 return rsp.split(';')
221 return rsp.split(';')
222 def _submitone(self, op, args):
222 def _submitone(self, op, args):
223 return self._call(op, **args)
223 return self._call(op, **args)
224
224
225 @batchable
225 @batchable
226 def lookup(self, key):
226 def lookup(self, key):
227 self.requirecap('lookup', _('look up remote revision'))
227 self.requirecap('lookup', _('look up remote revision'))
228 f = future()
228 f = future()
229 yield {'key': encoding.fromlocal(key)}, f
229 yield {'key': encoding.fromlocal(key)}, f
230 d = f.value
230 d = f.value
231 success, data = d[:-1].split(" ", 1)
231 success, data = d[:-1].split(" ", 1)
232 if int(success):
232 if int(success):
233 yield bin(data)
233 yield bin(data)
234 self._abort(error.RepoError(data))
234 self._abort(error.RepoError(data))
235
235
236 @batchable
236 @batchable
237 def heads(self):
237 def heads(self):
238 f = future()
238 f = future()
239 yield {}, f
239 yield {}, f
240 d = f.value
240 d = f.value
241 try:
241 try:
242 yield decodelist(d[:-1])
242 yield decodelist(d[:-1])
243 except ValueError:
243 except ValueError:
244 self._abort(error.ResponseError(_("unexpected response:"), d))
244 self._abort(error.ResponseError(_("unexpected response:"), d))
245
245
246 @batchable
246 @batchable
247 def known(self, nodes):
247 def known(self, nodes):
248 f = future()
248 f = future()
249 yield {'nodes': encodelist(nodes)}, f
249 yield {'nodes': encodelist(nodes)}, f
250 d = f.value
250 d = f.value
251 try:
251 try:
252 yield [bool(int(b)) for b in d]
252 yield [bool(int(b)) for b in d]
253 except ValueError:
253 except ValueError:
254 self._abort(error.ResponseError(_("unexpected response:"), d))
254 self._abort(error.ResponseError(_("unexpected response:"), d))
255
255
256 @batchable
256 @batchable
257 def branchmap(self):
257 def branchmap(self):
258 f = future()
258 f = future()
259 yield {}, f
259 yield {}, f
260 d = f.value
260 d = f.value
261 try:
261 try:
262 branchmap = {}
262 branchmap = {}
263 for branchpart in d.splitlines():
263 for branchpart in d.splitlines():
264 branchname, branchheads = branchpart.split(' ', 1)
264 branchname, branchheads = branchpart.split(' ', 1)
265 branchname = encoding.tolocal(urllib.unquote(branchname))
265 branchname = encoding.tolocal(urllib.unquote(branchname))
266 branchheads = decodelist(branchheads)
266 branchheads = decodelist(branchheads)
267 branchmap[branchname] = branchheads
267 branchmap[branchname] = branchheads
268 yield branchmap
268 yield branchmap
269 except TypeError:
269 except TypeError:
270 self._abort(error.ResponseError(_("unexpected response:"), d))
270 self._abort(error.ResponseError(_("unexpected response:"), d))
271
271
272 def branches(self, nodes):
272 def branches(self, nodes):
273 n = encodelist(nodes)
273 n = encodelist(nodes)
274 d = self._call("branches", nodes=n)
274 d = self._call("branches", nodes=n)
275 try:
275 try:
276 br = [tuple(decodelist(b)) for b in d.splitlines()]
276 br = [tuple(decodelist(b)) for b in d.splitlines()]
277 return br
277 return br
278 except ValueError:
278 except ValueError:
279 self._abort(error.ResponseError(_("unexpected response:"), d))
279 self._abort(error.ResponseError(_("unexpected response:"), d))
280
280
281 def between(self, pairs):
281 def between(self, pairs):
282 batch = 8 # avoid giant requests
282 batch = 8 # avoid giant requests
283 r = []
283 r = []
284 for i in xrange(0, len(pairs), batch):
284 for i in xrange(0, len(pairs), batch):
285 n = " ".join([encodelist(p, '-') for p in pairs[i:i + batch]])
285 n = " ".join([encodelist(p, '-') for p in pairs[i:i + batch]])
286 d = self._call("between", pairs=n)
286 d = self._call("between", pairs=n)
287 try:
287 try:
288 r.extend(l and decodelist(l) or [] for l in d.splitlines())
288 r.extend(l and decodelist(l) or [] for l in d.splitlines())
289 except ValueError:
289 except ValueError:
290 self._abort(error.ResponseError(_("unexpected response:"), d))
290 self._abort(error.ResponseError(_("unexpected response:"), d))
291 return r
291 return r
292
292
293 @batchable
293 @batchable
294 def pushkey(self, namespace, key, old, new):
294 def pushkey(self, namespace, key, old, new):
295 if not self.capable('pushkey'):
295 if not self.capable('pushkey'):
296 yield False, None
296 yield False, None
297 f = future()
297 f = future()
298 self.ui.debug('preparing pushkey for "%s:%s"\n' % (namespace, key))
298 self.ui.debug('preparing pushkey for "%s:%s"\n' % (namespace, key))
299 yield {'namespace': encoding.fromlocal(namespace),
299 yield {'namespace': encoding.fromlocal(namespace),
300 'key': encoding.fromlocal(key),
300 'key': encoding.fromlocal(key),
301 'old': encoding.fromlocal(old),
301 'old': encoding.fromlocal(old),
302 'new': encoding.fromlocal(new)}, f
302 'new': encoding.fromlocal(new)}, f
303 d = f.value
303 d = f.value
304 d, output = d.split('\n', 1)
304 d, output = d.split('\n', 1)
305 try:
305 try:
306 d = bool(int(d))
306 d = bool(int(d))
307 except ValueError:
307 except ValueError:
308 raise error.ResponseError(
308 raise error.ResponseError(
309 _('push failed (unexpected response):'), d)
309 _('push failed (unexpected response):'), d)
310 for l in output.splitlines(True):
310 for l in output.splitlines(True):
311 self.ui.status(_('remote: '), l)
311 self.ui.status(_('remote: '), l)
312 yield d
312 yield d
313
313
314 @batchable
314 @batchable
315 def listkeys(self, namespace):
315 def listkeys(self, namespace):
316 if not self.capable('pushkey'):
316 if not self.capable('pushkey'):
317 yield {}, None
317 yield {}, None
318 f = future()
318 f = future()
319 self.ui.debug('preparing listkeys for "%s"\n' % namespace)
319 self.ui.debug('preparing listkeys for "%s"\n' % namespace)
320 yield {'namespace': encoding.fromlocal(namespace)}, f
320 yield {'namespace': encoding.fromlocal(namespace)}, f
321 d = f.value
321 d = f.value
322 yield pushkeymod.decodekeys(d)
322 yield pushkeymod.decodekeys(d)
323
323
324 def stream_out(self):
324 def stream_out(self):
325 return self._callstream('stream_out')
325 return self._callstream('stream_out')
326
326
327 def changegroup(self, nodes, kind):
327 def changegroup(self, nodes, kind):
328 n = encodelist(nodes)
328 n = encodelist(nodes)
329 f = self._callcompressable("changegroup", roots=n)
329 f = self._callcompressable("changegroup", roots=n)
330 return changegroupmod.unbundle10(f, 'UN')
330 return changegroupmod.unbundle10(f, 'UN')
331
331
332 def changegroupsubset(self, bases, heads, kind):
332 def changegroupsubset(self, bases, heads, kind):
333 self.requirecap('changegroupsubset', _('look up remote changes'))
333 self.requirecap('changegroupsubset', _('look up remote changes'))
334 bases = encodelist(bases)
334 bases = encodelist(bases)
335 heads = encodelist(heads)
335 heads = encodelist(heads)
336 f = self._callcompressable("changegroupsubset",
336 f = self._callcompressable("changegroupsubset",
337 bases=bases, heads=heads)
337 bases=bases, heads=heads)
338 return changegroupmod.unbundle10(f, 'UN')
338 return changegroupmod.unbundle10(f, 'UN')
339
339
340 def getbundle(self, source, **kwargs):
340 def getbundle(self, source, **kwargs):
341 self.requirecap('getbundle', _('look up remote changes'))
341 self.requirecap('getbundle', _('look up remote changes'))
342 opts = {}
342 opts = {}
343 for key, value in kwargs.iteritems():
343 for key, value in kwargs.iteritems():
344 if value is None:
344 if value is None:
345 continue
345 continue
346 keytype = gboptsmap.get(key)
346 keytype = gboptsmap.get(key)
347 if keytype is None:
347 if keytype is None:
348 assert False, 'unexpected'
348 assert False, 'unexpected'
349 elif keytype == 'nodes':
349 elif keytype == 'nodes':
350 value = encodelist(value)
350 value = encodelist(value)
351 elif keytype == 'csv':
351 elif keytype == 'csv':
352 value = ','.join(value)
352 value = ','.join(value)
353 elif keytype == 'boolean':
353 elif keytype == 'boolean':
354 value = bool(value)
354 value = bool(value)
355 elif keytype != 'plain':
355 elif keytype != 'plain':
356 raise KeyError('unknown getbundle option type %s'
356 raise KeyError('unknown getbundle option type %s'
357 % keytype)
357 % keytype)
358 opts[key] = value
358 opts[key] = value
359 f = self._callcompressable("getbundle", **opts)
359 f = self._callcompressable("getbundle", **opts)
360 bundlecaps = kwargs.get('bundlecaps')
360 bundlecaps = kwargs.get('bundlecaps')
361 if bundlecaps is not None and 'HG2X' in bundlecaps:
361 if bundlecaps is not None and 'HG2X' in bundlecaps:
362 return bundle2.unbundle20(self.ui, f)
362 return bundle2.unbundle20(self.ui, f)
363 else:
363 else:
364 return changegroupmod.unbundle10(f, 'UN')
364 return changegroupmod.unbundle10(f, 'UN')
365
365
366 def unbundle(self, cg, heads, source):
366 def unbundle(self, cg, heads, source):
367 '''Send cg (a readable file-like object representing the
367 '''Send cg (a readable file-like object representing the
368 changegroup to push, typically a chunkbuffer object) to the
368 changegroup to push, typically a chunkbuffer object) to the
369 remote server as a bundle.
369 remote server as a bundle.
370
370
371 When pushing a bundle10 stream, return an integer indicating the
371 When pushing a bundle10 stream, return an integer indicating the
372 result of the push (see localrepository.addchangegroup()).
372 result of the push (see localrepository.addchangegroup()).
373
373
374 When pushing a bundle20 stream, return a bundle20 stream.'''
374 When pushing a bundle20 stream, return a bundle20 stream.'''
375
375
376 if heads != ['force'] and self.capable('unbundlehash'):
376 if heads != ['force'] and self.capable('unbundlehash'):
377 heads = encodelist(['hashed',
377 heads = encodelist(['hashed',
378 util.sha1(''.join(sorted(heads))).digest()])
378 util.sha1(''.join(sorted(heads))).digest()])
379 else:
379 else:
380 heads = encodelist(heads)
380 heads = encodelist(heads)
381
381
382 if util.safehasattr(cg, 'deltaheader'):
382 if util.safehasattr(cg, 'deltaheader'):
383 # this is a bundle10, do the old-style call sequence
383 # this is a bundle10, do the old-style call sequence
384 ret, output = self._callpush("unbundle", cg, heads=heads)
384 ret, output = self._callpush("unbundle", cg, heads=heads)
385 if ret == "":
385 if ret == "":
386 raise error.ResponseError(
386 raise error.ResponseError(
387 _('push failed:'), output)
387 _('push failed:'), output)
388 try:
388 try:
389 ret = int(ret)
389 ret = int(ret)
390 except ValueError:
390 except ValueError:
391 raise error.ResponseError(
391 raise error.ResponseError(
392 _('push failed (unexpected response):'), ret)
392 _('push failed (unexpected response):'), ret)
393
393
394 for l in output.splitlines(True):
394 for l in output.splitlines(True):
395 self.ui.status(_('remote: '), l)
395 self.ui.status(_('remote: '), l)
396 else:
396 else:
397 # bundle2 push. Send a stream, fetch a stream.
397 # bundle2 push. Send a stream, fetch a stream.
398 stream = self._calltwowaystream('unbundle', cg, heads=heads)
398 stream = self._calltwowaystream('unbundle', cg, heads=heads)
399 ret = bundle2.unbundle20(self.ui, stream)
399 ret = bundle2.unbundle20(self.ui, stream)
400 return ret
400 return ret
401
401
402 def debugwireargs(self, one, two, three=None, four=None, five=None):
402 def debugwireargs(self, one, two, three=None, four=None, five=None):
403 # don't pass optional arguments left at their default value
403 # don't pass optional arguments left at their default value
404 opts = {}
404 opts = {}
405 if three is not None:
405 if three is not None:
406 opts['three'] = three
406 opts['three'] = three
407 if four is not None:
407 if four is not None:
408 opts['four'] = four
408 opts['four'] = four
409 return self._call('debugwireargs', one=one, two=two, **opts)
409 return self._call('debugwireargs', one=one, two=two, **opts)
410
410
411 def _call(self, cmd, **args):
411 def _call(self, cmd, **args):
412 """execute <cmd> on the server
412 """execute <cmd> on the server
413
413
414 The command is expected to return a simple string.
414 The command is expected to return a simple string.
415
415
416 returns the server reply as a string."""
416 returns the server reply as a string."""
417 raise NotImplementedError()
417 raise NotImplementedError()
418
418
419 def _callstream(self, cmd, **args):
419 def _callstream(self, cmd, **args):
420 """execute <cmd> on the server
420 """execute <cmd> on the server
421
421
422 The command is expected to return a stream.
422 The command is expected to return a stream.
423
423
424 returns the server reply as a file like object."""
424 returns the server reply as a file like object."""
425 raise NotImplementedError()
425 raise NotImplementedError()
426
426
427 def _callcompressable(self, cmd, **args):
427 def _callcompressable(self, cmd, **args):
428 """execute <cmd> on the server
428 """execute <cmd> on the server
429
429
430 The command is expected to return a stream.
430 The command is expected to return a stream.
431
431
432 The stream may have been compressed in some implementations. This
432 The stream may have been compressed in some implementations. This
433 function takes care of the decompression. This is the only difference
433 function takes care of the decompression. This is the only difference
434 with _callstream.
434 with _callstream.
435
435
436 returns the server reply as a file like object.
436 returns the server reply as a file like object.
437 """
437 """
438 raise NotImplementedError()
438 raise NotImplementedError()
439
439
440 def _callpush(self, cmd, fp, **args):
440 def _callpush(self, cmd, fp, **args):
441 """execute a <cmd> on server
441 """execute a <cmd> on server
442
442
443 The command is expected to be related to a push. Push has a special
443 The command is expected to be related to a push. Push has a special
444 return method.
444 return method.
445
445
446 returns the server reply as a (ret, output) tuple. ret is either
446 returns the server reply as a (ret, output) tuple. ret is either
447 empty (error) or a stringified int.
447 empty (error) or a stringified int.
448 """
448 """
449 raise NotImplementedError()
449 raise NotImplementedError()
450
450
451 def _calltwowaystream(self, cmd, fp, **args):
451 def _calltwowaystream(self, cmd, fp, **args):
452 """execute <cmd> on server
452 """execute <cmd> on server
453
453
454 The command will send a stream to the server and get a stream in reply.
454 The command will send a stream to the server and get a stream in reply.
455 """
455 """
456 raise NotImplementedError()
456 raise NotImplementedError()
457
457
458 def _abort(self, exception):
458 def _abort(self, exception):
459 """clearly abort the wire protocol connection and raise the exception
459 """clearly abort the wire protocol connection and raise the exception
460 """
460 """
461 raise NotImplementedError()
461 raise NotImplementedError()
462
462
463 # server side
463 # server side
464
464
465 # wire protocol command can either return a string or one of these classes.
465 # wire protocol command can either return a string or one of these classes.
466 class streamres(object):
466 class streamres(object):
467 """wireproto reply: binary stream
467 """wireproto reply: binary stream
468
468
469 The call was successful and the result is a stream.
469 The call was successful and the result is a stream.
470 Iterate on the `self.gen` attribute to retrieve chunks.
470 Iterate on the `self.gen` attribute to retrieve chunks.
471 """
471 """
472 def __init__(self, gen):
472 def __init__(self, gen):
473 self.gen = gen
473 self.gen = gen
474
474
475 class pushres(object):
475 class pushres(object):
476 """wireproto reply: success with simple integer return
476 """wireproto reply: success with simple integer return
477
477
478 The call was successful and returned an integer contained in `self.res`.
478 The call was successful and returned an integer contained in `self.res`.
479 """
479 """
480 def __init__(self, res):
480 def __init__(self, res):
481 self.res = res
481 self.res = res
482
482
483 class pusherr(object):
483 class pusherr(object):
484 """wireproto reply: failure
484 """wireproto reply: failure
485
485
486 The call failed. The `self.res` attribute contains the error message.
486 The call failed. The `self.res` attribute contains the error message.
487 """
487 """
488 def __init__(self, res):
488 def __init__(self, res):
489 self.res = res
489 self.res = res
490
490
491 class ooberror(object):
491 class ooberror(object):
492 """wireproto reply: failure of a batch of operation
492 """wireproto reply: failure of a batch of operation
493
493
494 Something failed during a batch call. The error message is stored in
494 Something failed during a batch call. The error message is stored in
495 `self.message`.
495 `self.message`.
496 """
496 """
497 def __init__(self, message):
497 def __init__(self, message):
498 self.message = message
498 self.message = message
499
499
500 def dispatch(repo, proto, command):
500 def dispatch(repo, proto, command):
501 repo = repo.filtered("served")
501 repo = repo.filtered("served")
502 func, spec = commands[command]
502 func, spec = commands[command]
503 args = proto.getargs(spec)
503 args = proto.getargs(spec)
504 return func(repo, proto, *args)
504 return func(repo, proto, *args)
505
505
506 def options(cmd, keys, others):
506 def options(cmd, keys, others):
507 opts = {}
507 opts = {}
508 for k in keys:
508 for k in keys:
509 if k in others:
509 if k in others:
510 opts[k] = others[k]
510 opts[k] = others[k]
511 del others[k]
511 del others[k]
512 if others:
512 if others:
513 sys.stderr.write("warning: %s ignored unexpected arguments %s\n"
513 sys.stderr.write("warning: %s ignored unexpected arguments %s\n"
514 % (cmd, ",".join(others)))
514 % (cmd, ",".join(others)))
515 return opts
515 return opts
516
516
517 # list of commands
517 # list of commands
518 commands = {}
518 commands = {}
519
519
520 def wireprotocommand(name, args=''):
520 def wireprotocommand(name, args=''):
521 """decorator for wire protocol command"""
521 """decorator for wire protocol command"""
522 def register(func):
522 def register(func):
523 commands[name] = (func, args)
523 commands[name] = (func, args)
524 return func
524 return func
525 return register
525 return register
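
# Registration sketch (hypothetical command): a wire command is added by
# decorating a function; the spec string names the arguments to unpack.
@wireprotocommand('ping', 'word')
def _ping(repo, proto, word):
    return 'pong %s\n' % word
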
526
526
527 @wireprotocommand('batch', 'cmds *')
527 @wireprotocommand('batch', 'cmds *')
528 def batch(repo, proto, cmds, others):
528 def batch(repo, proto, cmds, others):
529 repo = repo.filtered("served")
529 repo = repo.filtered("served")
530 res = []
530 res = []
531 for pair in cmds.split(';'):
531 for pair in cmds.split(';'):
532 op, args = pair.split(' ', 1)
532 op, args = pair.split(' ', 1)
533 vals = {}
533 vals = {}
534 for a in args.split(','):
534 for a in args.split(','):
535 if a:
535 if a:
536 n, v = a.split('=')
536 n, v = a.split('=')
537 vals[n] = unescapearg(v)
537 vals[n] = unescapearg(v)
538 func, spec = commands[op]
538 func, spec = commands[op]
539 if spec:
539 if spec:
540 keys = spec.split()
540 keys = spec.split()
541 data = {}
541 data = {}
542 for k in keys:
542 for k in keys:
543 if k == '*':
543 if k == '*':
544 star = {}
544 star = {}
545 for key in vals.keys():
545 for key in vals.keys():
546 if key not in keys:
546 if key not in keys:
547 star[key] = vals[key]
547 star[key] = vals[key]
548 data['*'] = star
548 data['*'] = star
549 else:
549 else:
550 data[k] = vals[k]
550 data[k] = vals[k]
551 result = func(repo, proto, *[data[k] for k in keys])
551 result = func(repo, proto, *[data[k] for k in keys])
552 else:
552 else:
553 result = func(repo, proto)
553 result = func(repo, proto)
554 if isinstance(result, ooberror):
554 if isinstance(result, ooberror):
555 return result
555 return result
556 res.append(escapearg(result))
556 res.append(escapearg(result))
557 return ';'.join(res)
557 return ';'.join(res)
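
# Payload sketch (hypothetical values): the `cmds` argument parsed above is
# exactly what _submitbatch builds on the client side, e.g.
# 'heads ;known nodes=<hex>' for two queued commands.
def _batchpayloaddemo():
    req = [('heads', {}), ('known', {'nodes': '00' * 20})]
    cmds = []
    for op, argsdict in req:
        args = ','.join('%s=%s' % p for p in argsdict.iteritems())
        cmds.append('%s %s' % (op, args))
    return ';'.join(cmds)
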
558
558
559 @wireprotocommand('between', 'pairs')
559 @wireprotocommand('between', 'pairs')
560 def between(repo, proto, pairs):
560 def between(repo, proto, pairs):
561 pairs = [decodelist(p, '-') for p in pairs.split(" ")]
561 pairs = [decodelist(p, '-') for p in pairs.split(" ")]
562 r = []
562 r = []
563 for b in repo.between(pairs):
563 for b in repo.between(pairs):
564 r.append(encodelist(b) + "\n")
564 r.append(encodelist(b) + "\n")
565 return "".join(r)
565 return "".join(r)
566
566
567 @wireprotocommand('branchmap')
567 @wireprotocommand('branchmap')
568 def branchmap(repo, proto):
568 def branchmap(repo, proto):
569 branchmap = repo.branchmap()
569 branchmap = repo.branchmap()
570 heads = []
570 heads = []
571 for branch, nodes in branchmap.iteritems():
571 for branch, nodes in branchmap.iteritems():
572 branchname = urllib.quote(encoding.fromlocal(branch))
572 branchname = urllib.quote(encoding.fromlocal(branch))
573 branchnodes = encodelist(nodes)
573 branchnodes = encodelist(nodes)
574 heads.append('%s %s' % (branchname, branchnodes))
574 heads.append('%s %s' % (branchname, branchnodes))
575 return '\n'.join(heads)
575 return '\n'.join(heads)
576
576
@wireprotocommand('branches', 'nodes')
def branches(repo, proto, nodes):
    nodes = decodelist(nodes)
    r = []
    for b in repo.branches(nodes):
        r.append(encodelist(b) + "\n")
    return "".join(r)


wireprotocaps = ['lookup', 'changegroupsubset', 'branchmap', 'pushkey',
                 'known', 'getbundle', 'unbundlehash', 'batch']

def _capabilities(repo, proto):
    """return a list of capabilities for a repo

    This function exists to allow extensions to easily wrap capabilities
    computation

    - returns a list: easy to alter
    - changes done here are propagated to both the `capabilities` and
      `hello` commands without any other action needed.
    """
    # copy to prevent modification of the global list
    caps = list(wireprotocaps)
    if _allowstream(repo.ui):
        if repo.ui.configbool('server', 'preferuncompressed', False):
            caps.append('stream-preferred')
        requiredformats = repo.requirements & repo.supportedformats
        # if our local revlogs are just revlogv1, add 'stream' cap
        if not requiredformats - set(('revlogv1',)):
            caps.append('stream')
        # otherwise, add 'streamreqs' detailing our local revlog format
        else:
            caps.append('streamreqs=%s' % ','.join(requiredformats))
    if repo.ui.configbool('experimental', 'bundle2-exp', False):
        capsblob = bundle2.encodecaps(bundle2.capabilities)
        caps.append('bundle2-exp=' + urllib.quote(capsblob))
    caps.append('unbundle=%s' % ','.join(changegroupmod.bundlepriority))
    caps.append('httpheader=1024')
    return caps

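# Illustrative example (not in the original module): the assembled list,
# joined by spaces, might typically read something like
#
#     lookup changegroupsubset branchmap pushkey known getbundle
#     unbundlehash batch stream unbundle=HG10GZ,HG10BZ,HG10UN
#     httpheader=1024
#
# (the exact unbundle= value depends on changegroupmod.bundlepriority).
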
# If you are writing an extension and consider wrapping this function,
# wrap `_capabilities` instead.
@wireprotocommand('capabilities')
def capabilities(repo, proto):
    return ' '.join(_capabilities(repo, proto))

@wireprotocommand('changegroup', 'roots')
def changegroup(repo, proto, roots):
    nodes = decodelist(roots)
    cg = changegroupmod.changegroup(repo, nodes, 'serve')
    return streamres(proto.groupchunks(cg))

@wireprotocommand('changegroupsubset', 'bases heads')
def changegroupsubset(repo, proto, bases, heads):
    bases = decodelist(bases)
    heads = decodelist(heads)
    cg = changegroupmod.changegroupsubset(repo, bases, heads, 'serve')
    return streamres(proto.groupchunks(cg))

@wireprotocommand('debugwireargs', 'one two *')
def debugwireargs(repo, proto, one, two, others):
    # only accept optional args from the known set
    opts = options('debugwireargs', ['three', 'four'], others)
    return repo.debugwireargs(one, two, **opts)

# List of options accepted by getbundle.
#
# Meant to be extended by extensions. It is the extension's responsibility to
# ensure such options are properly processed in exchange.getbundle.
gboptslist = ['heads', 'common', 'bundlecaps']

@wireprotocommand('getbundle', '*')
def getbundle(repo, proto, others):
    opts = options('getbundle', gboptsmap.keys(), others)
    for k, v in opts.iteritems():
        keytype = gboptsmap[k]
        if keytype == 'nodes':
            opts[k] = decodelist(v)
        elif keytype == 'csv':
            opts[k] = set(v.split(','))
        elif keytype == 'boolean':
            opts[k] = '%i' % bool(v)
        elif keytype != 'plain':
            raise KeyError('unknown getbundle option type %s'
                           % keytype)
    cg = exchange.getbundle(repo, 'serve', **opts)
    return streamres(proto.groupchunks(cg))

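# Illustrative example (not in the original module): with the assumed
# gboptsmap typing 'heads' as 'nodes' and 'bundlecaps' as 'csv', an
# incoming heads value of "<40-hex-node> <40-hex-node>" decodes to a
# list of two binary nodes, and a bundlecaps value of "HG2X,b2x:listkeys"
# becomes set(['HG2X', 'b2x:listkeys']) before being handed to
# exchange.getbundle().
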
@wireprotocommand('heads')
def heads(repo, proto):
    h = repo.heads()
    return encodelist(h) + "\n"

@wireprotocommand('hello')
def hello(repo, proto):
    '''the hello command returns a set of lines describing various
    interesting things about the server, in an RFC822-like format.
    Currently the only one defined is "capabilities", which
    consists of a line in the form:

    capabilities: space separated list of tokens
    '''
    return "capabilities: %s\n" % (capabilities(repo, proto))

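# Illustrative example (not in the original module): a hello response is
# a single RFC822-style header line, e.g.
#
#     capabilities: lookup branchmap pushkey known getbundle batch ...
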
@wireprotocommand('listkeys', 'namespace')
def listkeys(repo, proto, namespace):
    d = repo.listkeys(encoding.tolocal(namespace)).items()
    return pushkeymod.encodekeys(d)

@wireprotocommand('lookup', 'key')
def lookup(repo, proto, key):
    try:
        k = encoding.tolocal(key)
        c = repo[k]
        r = c.hex()
        success = 1
    except Exception, inst:
        r = str(inst)
        success = 0
    return "%s %s\n" % (success, r)

@wireprotocommand('known', 'nodes *')
def known(repo, proto, nodes, others):
    return ''.join(b and "1" or "0" for b in repo.known(decodelist(nodes)))

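# Illustrative example (not in the original module): for a 'known'
# request carrying three nodes of which only the first and last exist
# locally, the response body is the bare string "101".
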
@wireprotocommand('pushkey', 'namespace key old new')
def pushkey(repo, proto, namespace, key, old, new):
    # compatibility with pre-1.8 clients which were accidentally
    # sending raw binary nodes rather than utf-8-encoded hex
    if len(new) == 20 and new.encode('string-escape') != new:
        # looks like it could be a binary node
        try:
            new.decode('utf-8')
            new = encoding.tolocal(new) # but cleanly decodes as UTF-8
        except UnicodeDecodeError:
            pass # binary, leave unmodified
    else:
        new = encoding.tolocal(new) # normal path

    if util.safehasattr(proto, 'restore'):

        proto.redirect()

        try:
            r = repo.pushkey(encoding.tolocal(namespace), encoding.tolocal(key),
                             encoding.tolocal(old), new) or False
        except util.Abort:
            r = False

        output = proto.restore()

        return '%s\n%s' % (int(r), output)

    r = repo.pushkey(encoding.tolocal(namespace), encoding.tolocal(key),
                     encoding.tolocal(old), new)
    return '%s\n' % int(r)

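# Illustrative note (not in the original module): the pushkey response is
# the integer status on its own line ("1\n" on success, "0\n" on
# failure), followed by any captured server output when the protocol
# supports output redirection (the restore() branch above).
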
def _allowstream(ui):
    return ui.configbool('server', 'uncompressed', True, untrusted=True)

def _walkstreamfiles(repo):
    # this is its own function so extensions can override it
    return repo.store.walk()

@wireprotocommand('stream_out')
def stream(repo, proto):
    '''If the server supports streaming clone, it advertises the "stream"
    capability with a value representing the version and flags of the repo
    it is serving. Client checks to see if it understands the format.

    The format is simple: the server writes out a line with the number
    of files, then the total number of bytes to be transferred (separated
    by a space). Then, for each file, the server first writes the filename
    and file size (separated by the null character), then the file contents.
    '''

    if not _allowstream(repo.ui):
        return '1\n'

    entries = []
    total_bytes = 0
    try:
        # get consistent snapshot of repo, lock during scan
        lock = repo.lock()
        try:
            repo.ui.debug('scanning\n')
            for name, ename, size in _walkstreamfiles(repo):
                if size:
                    entries.append((name, size))
                    total_bytes += size
        finally:
            lock.release()
    except error.LockError:
        return '2\n' # error: 2

    def streamer(repo, entries, total):
        '''stream out all metadata files in repository.'''
        yield '0\n' # success
        repo.ui.debug('%d files, %d bytes to transfer\n' %
                      (len(entries), total_bytes))
        yield '%d %d\n' % (len(entries), total_bytes)

        sopener = repo.sopener
        oldaudit = sopener.mustaudit
        debugflag = repo.ui.debugflag
        sopener.mustaudit = False

        try:
            for name, size in entries:
                if debugflag:
                    repo.ui.debug('sending %s (%d bytes)\n' % (name, size))
                # partially encode name over the wire for backwards compat
                yield '%s\0%d\n' % (store.encodedir(name), size)
                if size <= 65536:
                    fp = sopener(name)
                    try:
                        data = fp.read(size)
                    finally:
                        fp.close()
                    yield data
                else:
                    for chunk in util.filechunkiter(sopener(name), limit=size):
                        yield chunk
        # replace with "finally:" when support for python 2.4 has been dropped
        except Exception:
            sopener.mustaudit = oldaudit
            raise
        sopener.mustaudit = oldaudit

    return streamres(streamer(repo, entries, total_bytes))

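# Illustrative example (not in the original module): a successful
# stream_out response for a two-file repository could start with
#
#     0\n
#     2 8192\n
#     data/foo.i\x006144\n<6144 bytes>data/bar.i\x002048\n<2048 bytes>
#
# where '0' is the success code and '2 8192' is the file count and
# total byte count; file names here are hypothetical.
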
@wireprotocommand('unbundle', 'heads')
def unbundle(repo, proto, heads):
    their_heads = decodelist(heads)

    try:
        proto.redirect()

        exchange.check_heads(repo, their_heads, 'preparing changes')

        # write bundle data to temporary file because it can be big
        fd, tempname = tempfile.mkstemp(prefix='hg-unbundle-')
        fp = os.fdopen(fd, 'wb+')
        r = 0
        try:
            proto.getfile(fp)
            fp.seek(0)
            gen = exchange.readbundle(repo.ui, fp, None)
            r = exchange.unbundle(repo, gen, their_heads, 'serve',
                                  proto._client())
            if util.safehasattr(r, 'addpart'):
                # The return looks streamable: we are in the bundle2 case
                # and should return a stream.
                return streamres(r.getchunks())
            return pushres(r)

        finally:
            fp.close()
            os.unlink(tempname)
    except error.BundleValueError, exc:
        bundler = bundle2.bundle20(repo.ui)
        errpart = bundler.newpart('B2X:ERROR:UNSUPPORTEDCONTENT')
        if exc.parttype is not None:
            errpart.addparam('parttype', exc.parttype)
        if exc.params:
            errpart.addparam('params', '\0'.join(exc.params))
        return streamres(bundler.getchunks())
    except util.Abort, inst:
        # The old code we moved used sys.stderr directly.
        # We did not change it to minimise code change.
        # This needs to be moved to something proper.
        # Feel free to do it.
        if getattr(inst, 'duringunbundle2', False):
            bundler = bundle2.bundle20(repo.ui)
            manargs = [('message', str(inst))]
            advargs = []
            if inst.hint is not None:
                advargs.append(('hint', inst.hint))
            bundler.addpart(bundle2.bundlepart('B2X:ERROR:ABORT',
                                               manargs, advargs))
            return streamres(bundler.getchunks())
        else:
            sys.stderr.write("abort: %s\n" % inst)
            return pushres(0)
    except error.PushRaced, exc:
        if getattr(exc, 'duringunbundle2', False):
            bundler = bundle2.bundle20(repo.ui)
            bundler.newpart('B2X:ERROR:PUSHRACED', [('message', str(exc))])
            return streamres(bundler.getchunks())
        else:
            return pusherr(str(exc))
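
# Illustrative note (not in the original module): with a bundle2 client,
# errors travel back inside the reply bundle rather than as a bare error
# string. For example, an unsupported part raises BundleValueError and
# the client receives a B2X:ERROR:UNSUPPORTEDCONTENT part carrying the
# offending 'parttype' and 'params'; legacy clients instead get "abort:"
# on stderr and a 0 push result.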