cleanup: use modern @property/@foo.setter property specification...
Augie Fackler
r27879:52a4ad62 default
@@ -1,1552 +1,1554 b''
# bundle2.py - generic container format to transmit arbitrary data.
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""Handling of the new bundle2 format

The goal of bundle2 is to act as an atomic packet to transmit a set of
payloads in an application agnostic way. It consists of a sequence of "parts"
that will be handed to and processed by the application layer.


General format architecture
===========================

The format is architectured as follows:

- magic string
- stream level parameters
- payload parts (any number)
- end of stream marker.

the Binary format
============================

All numbers are unsigned and big-endian.

stream level parameters
------------------------

Binary format is as follows:

:params size: int32

    The total number of Bytes used by the parameters

:params value: arbitrary number of Bytes

    A blob of `params size` containing the serialized version of all stream
    level parameters.

    The blob contains a space separated list of parameters. Parameters with a
    value are stored in the form `<name>=<value>`. Both name and value are
    urlquoted.

    Empty names are obviously forbidden.

    Names MUST start with a letter. If this first letter is lower case, the
    parameter is advisory and can be safely ignored. However when the first
    letter is capital, the parameter is mandatory and the bundling process
    MUST stop if it is not able to process it.

    Stream parameters use a simple textual format for two main reasons:

    - Stream level parameters should remain simple and we want to discourage
      any crazy usage.
    - Textual data allows easy human inspection of a bundle2 header in case
      of troubles.

Any applicative level options MUST go into a bundle2 part instead.
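
For illustration, a parameters blob carrying one advisory and one mandatory
parameter could look like this (hypothetical values)::

    evolution=1 Compression=GZ

Here ``evolution`` is advisory (lower case first letter) and ``Compression``
is mandatory (capital first letter).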

Payload part
------------------------

Binary format is as follows:

:header size: int32

    The total number of Bytes used by the part header. When the header is
    empty (size = 0) this is interpreted as the end of stream marker.

:header:

    The header defines how to interpret the part. It contains two pieces of
    data: the part type, and the part parameters.

    The part type is used to route to an application level handler that can
    interpret the payload.

    Part parameters are passed to the application level handler. They are
    meant to convey information that will help the application level object
    to interpret the part payload.

    The binary format of the header is as follows:

    :typesize: (one byte)

    :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)

    :partid: A 32bits integer (unique in the bundle) that can be used to
             refer to this part.

    :parameters:

        Part's parameters may have arbitrary content, the binary structure
        is::

            <mandatory-count><advisory-count><param-sizes><param-data>

        :mandatory-count: 1 byte, number of mandatory parameters

        :advisory-count: 1 byte, number of advisory parameters

        :param-sizes:

            N couples of bytes, where N is the total number of parameters.
            Each couple contains (<size-of-key>, <size-of-value>) for one
            parameter.

        :param-data:

            A blob of bytes from which each parameter key and value can be
            retrieved using the list of size couples stored in the previous
            field.

            Mandatory parameters come first, then the advisory ones.

            Each parameter's key MUST be unique within the part.

:payload:

    payload is a series of `<chunksize><chunkdata>`.

    `chunksize` is an int32, `chunkdata` are plain bytes (as many as
    `chunksize` says). The payload part is concluded by a zero size chunk.

    The current implementation always produces either zero or one chunk.
    This is an implementation limitation that will ultimately be lifted.

    `chunksize` can be negative to trigger special case processing. No such
    processing is in place yet.
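
For illustration, a payload carrying the 11 bytes ``hello world`` in a single
chunk would be framed as (hypothetical layout)::

    <int32: 11><"hello world"><int32: 0>

with the trailing zero-size chunk marking the end of the payload.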

Bundle processing
============================

Each part is processed in order using a "part handler". Handlers are
registered for a certain part type.

The matching of a part to its handler is case insensitive. The case of the
part type is used to know if a part is mandatory or advisory. If the part
type contains any uppercase char it is considered mandatory. When no handler
is known for a mandatory part, the process is aborted and an exception is
raised. If the part is advisory and no handler is known, the part is ignored.
When the process is aborted, the full bundle is still read from the stream to
keep the channel usable. But none of the parts read after an abort are
processed. In the future, dropping the stream may become an option for
channels we do not care to preserve.
"""

from __future__ import absolute_import

import errno
import re
import string
import struct
import sys
import urllib

from .i18n import _
from . import (
    changegroup,
    error,
    obsolete,
    pushkey,
    tags,
    url,
    util,
)

_pack = struct.pack
_unpack = struct.unpack

_fstreamparamsize = '>i'
_fpartheadersize = '>i'
_fparttypesize = '>B'
_fpartid = '>I'
_fpayloadsize = '>i'
_fpartparamcount = '>BB'

preferedchunksize = 4096

_parttypeforbidden = re.compile('[^a-zA-Z0-9_:-]')

def outdebug(ui, message):
    """debug regarding output stream (bundling)"""
    if ui.configbool('devel', 'bundle2.debug', False):
        ui.debug('bundle2-output: %s\n' % message)

def indebug(ui, message):
    """debug on input stream (unbundling)"""
    if ui.configbool('devel', 'bundle2.debug', False):
        ui.debug('bundle2-input: %s\n' % message)

def validateparttype(parttype):
    """raise ValueError if a parttype contains invalid characters"""
    if _parttypeforbidden.search(parttype):
        raise ValueError(parttype)

def _makefpartparamsizes(nbparams):
    """return a struct format to read part parameter sizes

    The number of parameters is variable so we need to build that format
    dynamically.
    """
    return '>' + ('BB' * nbparams)
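
# Editor's sketch: for two parameters the generated format is '>BBBB', so
# struct.calcsize(_makefpartparamsizes(2)) == 4, i.e. one (key-size,
# value-size) byte couple per parameter.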

parthandlermapping = {}

def parthandler(parttype, params=()):
    """decorator that registers a function as a bundle2 part handler

    eg::

        @parthandler('myparttype', ('mandatory', 'param', 'handled'))
        def myparttypehandler(...):
            '''process a part of type "my part".'''
            ...
    """
    validateparttype(parttype)
    def _decorator(func):
        lparttype = parttype.lower() # enforce lower case matching.
        assert lparttype not in parthandlermapping
        parthandlermapping[lparttype] = func
        func.params = frozenset(params)
        return func
    return _decorator

class unbundlerecords(object):
    """keep record of what happens during an unbundle

    New records are added using `records.add('cat', obj)`. Where 'cat' is a
    category of record and obj is an arbitrary object.

    `records['cat']` will return all entries of this category 'cat'.

    Iterating on the object itself will yield `('category', obj)` tuples
    for all entries.

    All iterations happen in chronological order.
    """

    def __init__(self):
        self._categories = {}
        self._sequences = []
        self._replies = {}

    def add(self, category, entry, inreplyto=None):
        """add a new record of a given category.

        The entry can then be retrieved in the list returned by
        self['category']."""
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))
        if inreplyto is not None:
            self.getreplies(inreplyto).add(category, entry)

    def getreplies(self, partid):
        """get the records that are replies to a specific part"""
        return self._replies.setdefault(partid, unbundlerecords())

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

    def __len__(self):
        return len(self._sequences)

    def __nonzero__(self):
        return bool(self._sequences)
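
# Editor's sketch of typical usage (hypothetical category name):
#   records = unbundlerecords()
#   records.add('changegroup', {'return': 1})
#   records['changegroup']   # -> ({'return': 1},)
#   list(records)            # -> [('changegroup', {'return': 1})]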

class bundleoperation(object):
    """an object that represents a single bundling process

    Its purpose is to carry unbundle-related objects and states.

    A new object should be created at the beginning of each bundle
    processing. The object is to be returned by the processing function.

    The object has very little content now; it will ultimately contain:
    * an access to the repo the bundle is applied to,
    * a ui object,
    * a way to retrieve a transaction to add changes to the repo,
    * a way to record the result of processing each part,
    * a way to construct a bundle response when applicable.
    """

    def __init__(self, repo, transactiongetter, captureoutput=True):
        self.repo = repo
        self.ui = repo.ui
        self.records = unbundlerecords()
        self.gettransaction = transactiongetter
        self.reply = None
        self.captureoutput = captureoutput

class TransactionUnavailable(RuntimeError):
    pass

def _notransaction():
    """default method to get a transaction while processing a bundle

    Raise an exception to highlight the fact that no transaction was expected
    to be created"""
    raise TransactionUnavailable()

def applybundle(repo, unbundler, tr, source=None, url=None, op=None):
    # transform me into unbundler.apply() as soon as the freeze is lifted
    tr.hookargs['bundle2'] = '1'
    if source is not None and 'source' not in tr.hookargs:
        tr.hookargs['source'] = source
    if url is not None and 'url' not in tr.hookargs:
        tr.hookargs['url'] = url
    return processbundle(repo, unbundler, lambda: tr, op=op)
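
# Editor's sketch: processing a bundle outside any transaction (hypothetical
# ui/fp objects). processbundle() falls back to _notransaction, so any part
# handler that requests a transaction will raise TransactionUnavailable:
#   unbundler = getunbundler(ui, fp)
#   op = processbundle(repo, unbundler)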

def processbundle(repo, unbundler, transactiongetter=None, op=None):
    """This function processes a bundle, applying effects to/from a repo

    It iterates over each part then searches for and uses the proper handling
    code to process the part. Parts are processed in order.

    This is a very early version of this function that will be strongly
    reworked before final usage.

    An unknown mandatory part will abort the process.

    It is temporarily possible to provide a prebuilt bundleoperation to the
    function. This is used to ensure output is properly propagated in case of
    an error during the unbundling. This output capturing part will likely be
    reworked and this ability will probably go away in the process.
    """
    if op is None:
        if transactiongetter is None:
            transactiongetter = _notransaction
        op = bundleoperation(repo, transactiongetter)
    # todo:
    # - replace this with an init function soon.
    # - exception catching
    unbundler.params
    if repo.ui.debugflag:
        msg = ['bundle2-input-bundle:']
        if unbundler.params:
            msg.append(' %i params' % len(unbundler.params))
        if op.gettransaction is None:
            msg.append(' no-transaction')
        else:
            msg.append(' with-transaction')
        msg.append('\n')
        repo.ui.debug(''.join(msg))
    iterparts = enumerate(unbundler.iterparts())
    part = None
    nbpart = 0
    try:
        for nbpart, part in iterparts:
            _processpart(op, part)
    except BaseException as exc:
        for nbpart, part in iterparts:
            # consume the bundle content
            part.seek(0, 2)
        # Small hack to let caller code distinguish exceptions from bundle2
        # processing from exceptions raised when processing the old format.
        # This is mostly needed to handle different return codes to unbundle
        # according to the type of bundle. We should probably clean up or
        # drop this return code craziness in a future version.
        exc.duringunbundle2 = True
        salvaged = []
        replycaps = None
        if op.reply is not None:
            salvaged = op.reply.salvageoutput()
            replycaps = op.reply.capabilities
        exc._replycaps = replycaps
        exc._bundle2salvagedoutput = salvaged
        raise
    finally:
        repo.ui.debug('bundle2-input-bundle: %i parts total\n' % nbpart)

    return op

def _processpart(op, part):
    """process a single part from a bundle

    The part is guaranteed to have been fully consumed when the function
    exits (even if an exception is raised)."""
    status = 'unknown' # used by debug output
    try:
        try:
            handler = parthandlermapping.get(part.type)
            if handler is None:
                status = 'unsupported-type'
                raise error.BundleUnknownFeatureError(parttype=part.type)
            indebug(op.ui, 'found a handler for part %r' % part.type)
            unknownparams = part.mandatorykeys - handler.params
            if unknownparams:
                unknownparams = list(unknownparams)
                unknownparams.sort()
                status = 'unsupported-params (%s)' % unknownparams
                raise error.BundleUnknownFeatureError(parttype=part.type,
                                                      params=unknownparams)
            status = 'supported'
        except error.BundleUnknownFeatureError as exc:
            if part.mandatory: # mandatory parts
                raise
            indebug(op.ui, 'ignoring unsupported advisory part %s' % exc)
            return # skip to part processing
        finally:
            if op.ui.debugflag:
                msg = ['bundle2-input-part: "%s"' % part.type]
                if not part.mandatory:
                    msg.append(' (advisory)')
                nbmp = len(part.mandatorykeys)
                nbap = len(part.params) - nbmp
                if nbmp or nbap:
                    msg.append(' (params:')
                    if nbmp:
                        msg.append(' %i mandatory' % nbmp)
                    if nbap:
                        msg.append(' %i advisory' % nbap)
                    msg.append(')')
                msg.append(' %s\n' % status)
                op.ui.debug(''.join(msg))

        # handler is called outside the above try block so that we don't
        # risk catching KeyErrors from anything other than the
        # parthandlermapping lookup (any KeyError raised by handler()
        # itself represents a defect of a different variety).
        output = None
        if op.captureoutput and op.reply is not None:
            op.ui.pushbuffer(error=True, subproc=True)
            output = ''
        try:
            handler(op, part)
        finally:
            if output is not None:
                output = op.ui.popbuffer()
            if output:
                outpart = op.reply.newpart('output', data=output,
                                           mandatory=False)
                outpart.addparam('in-reply-to', str(part.id), mandatory=False)
    finally:
        # consume the part content to not corrupt the stream.
        part.seek(0, 2)


def decodecaps(blob):
    """decode a bundle2 caps bytes blob into a dictionary

    The blob is a list of capabilities (one per line)
    Capabilities may have values using a line of the form::

        capability=value1,value2,value3

    The values are always a list."""
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if '=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split('=', 1)
            vals = vals.split(',')
        key = urllib.unquote(key)
        vals = [urllib.unquote(v) for v in vals]
        caps[key] = vals
    return caps

def encodecaps(caps):
    """encode a bundle2 caps dictionary into a bytes blob"""
    chunks = []
    for ca in sorted(caps):
        vals = caps[ca]
        ca = urllib.quote(ca)
        vals = [urllib.quote(v) for v in vals]
        if vals:
            ca = "%s=%s" % (ca, ','.join(vals))
        chunks.append(ca)
    return '\n'.join(chunks)
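
# Editor's sketch: the two helpers above round-trip (hypothetical caps):
#   blob = encodecaps({'HG20': [], 'changegroup': ['01', '02']})
#   # -> 'HG20\nchangegroup=01,02'
#   decodecaps(blob)
#   # -> {'HG20': [], 'changegroup': ['01', '02']}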

class bundle20(object):
    """represent an outgoing bundle2 container

    Use the `addparam` method to add stream level parameters and `newpart` to
    populate it. Then call `getchunks` to retrieve all the binary chunks of
    data that compose the bundle2 container."""

    _magicstring = 'HG20'

    def __init__(self, ui, capabilities=()):
        self.ui = ui
        self._params = []
        self._parts = []
        self.capabilities = dict(capabilities)
        self._compressor = util.compressors[None]()

    def setcompression(self, alg):
        """setup core part compression to <alg>"""
        if alg is None:
            return
        assert not any(n.lower() == 'compression' for n, v in self._params)
        self.addparam('Compression', alg)
        self._compressor = util.compressors[alg]()

    @property
    def nbparts(self):
        """total number of parts added to the bundler"""
        return len(self._parts)

    # methods used to define the bundle2 content
    def addparam(self, name, value=None):
        """add a stream level parameter"""
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
        self._params.append((name, value))

    def addpart(self, part):
        """add a new part to the bundle2 container

        Parts contain the actual applicative payload."""
        assert part.id is None
        part.id = len(self._parts) # very cheap counter
        self._parts.append(part)

    def newpart(self, typeid, *args, **kwargs):
        """create a new part and add it to the containers

        The part is directly added to the containers. For now, this means
        that any failure to properly initialize the part after calling
        ``newpart`` should result in a failure of the whole bundling process.

        You can still fall back to manually create and add if you need better
        control."""
        part = bundlepart(typeid, *args, **kwargs)
        self.addpart(part)
        return part

    # methods used to generate the bundle2 stream
    def getchunks(self):
        if self.ui.debugflag:
            msg = ['bundle2-output-bundle: "%s",' % self._magicstring]
            if self._params:
                msg.append(' (%i params)' % len(self._params))
            msg.append(' %i parts total\n' % len(self._parts))
            self.ui.debug(''.join(msg))
        outdebug(self.ui, 'start emission of %s stream' % self._magicstring)
        yield self._magicstring
        param = self._paramchunk()
        outdebug(self.ui, 'bundle parameter: %s' % param)
        yield _pack(_fstreamparamsize, len(param))
        if param:
            yield param
        # starting compression
        for chunk in self._getcorechunk():
            yield self._compressor.compress(chunk)
        yield self._compressor.flush()

    def _paramchunk(self):
        """return an encoded version of all stream parameters"""
        blocks = []
        for par, value in self._params:
            par = urllib.quote(par)
            if value is not None:
                value = urllib.quote(value)
                par = '%s=%s' % (par, value)
            blocks.append(par)
        return ' '.join(blocks)

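    # Editor's sketch: given params [('caps', 'a b'), ('Compression', 'BZ')],
    # _paramchunk() above would return 'caps=a%20b Compression=BZ' (names
    # and values urlquoted, entries space separated).
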
    def _getcorechunk(self):
        """yield chunks for the core part of the bundle

        (all but headers and parameters)"""
        outdebug(self.ui, 'start of parts')
        for part in self._parts:
            outdebug(self.ui, 'bundle part: "%s"' % part.type)
            for chunk in part.getchunks(ui=self.ui):
                yield chunk
        outdebug(self.ui, 'end of bundle')
        yield _pack(_fpartheadersize, 0)


    def salvageoutput(self):
        """return a list with a copy of all output parts in the bundle

        This is meant to be used during error handling to make sure we
        preserve server output"""
        salvaged = []
        for part in self._parts:
            if part.type.startswith('output'):
                salvaged.append(part.copy())
        return salvaged


class unpackermixin(object):
    """A mixin to extract bytes and struct data from a stream"""

    def __init__(self, fp):
        self._fp = fp
        self._seekable = (util.safehasattr(fp, 'seek') and
                          util.safehasattr(fp, 'tell'))

    def _unpack(self, format):
        """unpack this struct format from the stream"""
        data = self._readexact(struct.calcsize(format))
        return _unpack(format, data)

    def _readexact(self, size):
        """read exactly <size> bytes from the stream"""
        return changegroup.readexactly(self._fp, size)

    def seek(self, offset, whence=0):
        """move the underlying file pointer"""
        if self._seekable:
            return self._fp.seek(offset, whence)
        else:
            raise NotImplementedError(_('File pointer is not seekable'))

    def tell(self):
        """return the file offset, or None if the file is not seekable"""
        if self._seekable:
            try:
                return self._fp.tell()
            except IOError as e:
                if e.errno == errno.ESPIPE:
                    self._seekable = False
                else:
                    raise
        return None

    def close(self):
        """close underlying file"""
        if util.safehasattr(self._fp, 'close'):
            return self._fp.close()

def getunbundler(ui, fp, magicstring=None):
    """return a valid unbundler object for a given magicstring"""
    if magicstring is None:
        magicstring = changegroup.readexactly(fp, 4)
    magic, version = magicstring[0:2], magicstring[2:4]
    if magic != 'HG':
        raise error.Abort(_('not a Mercurial bundle'))
    unbundlerclass = formatmap.get(version)
    if unbundlerclass is None:
        raise error.Abort(_('unknown bundle version %s') % version)
    unbundler = unbundlerclass(ui, fp)
    indebug(ui, 'start processing of %s stream' % magicstring)
    return unbundler

class unbundle20(unpackermixin):
    """interpret a bundle2 stream

    This class is fed with a binary stream and yields parts through its
    `iterparts` methods."""

    _magicstring = 'HG20'

    def __init__(self, ui, fp):
        """If header is specified, we do not read it out of the stream."""
        self.ui = ui
        self._decompressor = util.decompressors[None]
        self._compressed = None
        super(unbundle20, self).__init__(fp)

    @util.propertycache
    def params(self):
        """dictionary of stream level parameters"""
        indebug(self.ui, 'reading bundle2 stream parameters')
        params = {}
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize < 0:
            raise error.BundleValueError('negative bundle param size: %i'
                                         % paramssize)
        if paramssize:
            params = self._readexact(paramssize)
            params = self._processallparams(params)
        return params

    def _processallparams(self, paramsblock):
        """process a raw parameters block and return it as a dictionary"""
        params = {}
        for p in paramsblock.split(' '):
            p = p.split('=', 1)
            p = [urllib.unquote(i) for i in p]
            if len(p) < 2:
                p.append(None)
            self._processparam(*p)
            params[p[0]] = p[1]
        return params


    def _processparam(self, name, value):
        """process a parameter, applying its effect if needed

        Parameters starting with a lower case letter are advisory and will be
        ignored when unknown. Those starting with an upper case letter are
        mandatory and this function will raise an
        error.BundleUnknownFeatureError when unknown.

        Note: no options are currently supported. Any input will be either
        ignored or failing.
        """
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
        try:
            handler = b2streamparamsmap[name.lower()]
        except KeyError:
            if name[0].islower():
                indebug(self.ui, "ignoring unknown parameter %r" % name)
            else:
                raise error.BundleUnknownFeatureError(params=(name,))
        else:
            handler(self, name, value)

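    # Editor's sketch of the dispatch above (hypothetical parameter names):
    #   'compression'  -> handled by processcompression (registered below)
    #   'unknownlower' -> advisory, ignored with a debug message
    #   'UnknownUpper' -> mandatory, raises error.BundleUnknownFeatureError
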
    def _forwardchunks(self):
        """utility to transfer a bundle2 as binary

        This is made necessary by the fact the 'getbundle' command over 'ssh'
        has no way to know when the reply ends, relying on the bundle to be
        interpreted to know its end. This is terrible and we are sorry, but
        we needed to move forward to get general delta enabled.
        """
        yield self._magicstring
        assert 'params' not in vars(self)
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize < 0:
            raise error.BundleValueError('negative bundle param size: %i'
                                         % paramssize)
        yield _pack(_fstreamparamsize, paramssize)
        if paramssize:
            params = self._readexact(paramssize)
            self._processallparams(params)
            yield params
        assert self._decompressor is util.decompressors[None]
        # From there, payload might need to be decompressed
        self._fp = self._decompressor(self._fp)
        emptycount = 0
        while emptycount < 2:
            # so we can brainlessly loop
            assert _fpartheadersize == _fpayloadsize
            size = self._unpack(_fpartheadersize)[0]
            yield _pack(_fpartheadersize, size)
            if size:
                emptycount = 0
            else:
                emptycount += 1
                continue
            if size == flaginterrupt:
                continue
            elif size < 0:
                raise error.BundleValueError('negative chunk size: %i' % size)
            yield self._readexact(size)


    def iterparts(self):
        """yield all parts contained in the stream"""
        # make sure params have been loaded
        self.params
        # From there, payload needs to be decompressed
        self._fp = self._decompressor(self._fp)
        indebug(self.ui, 'start extraction of bundle2 parts')
        headerblock = self._readpartheader()
        while headerblock is not None:
            part = unbundlepart(self.ui, headerblock, self._fp)
            yield part
            part.seek(0, 2)
            headerblock = self._readpartheader()
        indebug(self.ui, 'end of bundle2 stream')

    def _readpartheader(self):
        """reads a part header size and return the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError('negative part header size: %i'
                                         % headersize)
        indebug(self.ui, 'part header size: %i' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def compressed(self):
        self.params # load params
        return self._compressed

formatmap = {'20': unbundle20}

b2streamparamsmap = {}

def b2streamparamhandler(name):
    """register a handler for a stream level parameter"""
    def decorator(func):
        assert name not in b2streamparamsmap
        b2streamparamsmap[name] = func
        return func
    return decorator

@b2streamparamhandler('compression')
def processcompression(unbundler, param, value):
    """read compression parameter and install payload decompression"""
    if value not in util.decompressors:
        raise error.BundleUnknownFeatureError(params=(param,),
                                              values=(value,))
    unbundler._decompressor = util.decompressors[value]
    if value is not None:
        unbundler._compressed = True

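# Editor's sketch: pairing this handler with bundle20.setcompression
# (assuming 'BZ' appears in both util.compressors and util.decompressors):
#   bundler = bundle20(ui)
#   bundler.setcompression('BZ')   # emits a mandatory 'Compression=BZ'
#   # on the receiving side, processcompression installs
#   # util.decompressors['BZ'] before part payloads are read
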
class bundlepart(object):
    """A bundle2 part contains application level payload

    The part `type` is used to route the part to the application level
    handler.

    The part payload is contained in ``part.data``. It could be raw bytes or
    a generator of byte chunks.

    You can add parameters to the part using the ``addparam`` method.
    Parameters can be either mandatory (default) or advisory. The remote side
    should be able to safely ignore the advisory ones.

    Both data and parameters cannot be modified after the generation has
    begun.
    """

    def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
                 data='', mandatory=True):
        validateparttype(parttype)
        self.id = None
        self.type = parttype
        self._data = data
        self._mandatoryparams = list(mandatoryparams)
        self._advisoryparams = list(advisoryparams)
        # checking for duplicated entries
        self._seenparams = set()
        for pname, __ in self._mandatoryparams + self._advisoryparams:
            if pname in self._seenparams:
                raise RuntimeError('duplicated params: %s' % pname)
            self._seenparams.add(pname)
        # status of the part's generation:
        # - None: not started,
        # - False: currently generated,
        # - True: generation done.
        self._generated = None
        self.mandatory = mandatory

    def copy(self):
        """return a copy of the part

        The new part has the very same content but no partid assigned yet.
        Parts with generated data cannot be copied."""
        assert not util.safehasattr(self.data, 'next')
        return self.__class__(self.type, self._mandatoryparams,
                              self._advisoryparams, self._data,
                              self.mandatory)

    # methods used to define the part content
-    def __setdata(self, data):
+    @property
+    def data(self):
+        return self._data
+
+    @data.setter
+    def data(self, data):
         if self._generated is not None:
             raise error.ReadOnlyPartError('part is being generated')
         self._data = data
-    def __getdata(self):
-        return self._data
-    data = property(__getdata, __setdata)

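    # Editor's sketch: with the @property/@data.setter pair above,
    # attribute-style assignment goes through the guard (hypothetical type):
    #   part = bundlepart('output', data='hello')
    #   part.data = 'world'      # allowed while generation has not started
    #   # once getchunks() has begun, the same assignment raises
    #   # error.ReadOnlyPartError('part is being generated')
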
    @property
    def mandatoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._mandatoryparams)

    @property
    def advisoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._advisoryparams)

    def addparam(self, name, value='', mandatory=True):
        if self._generated is not None:
            raise error.ReadOnlyPartError('part is being generated')
        if name in self._seenparams:
            raise ValueError('duplicated params: %s' % name)
        self._seenparams.add(name)
        params = self._advisoryparams
        if mandatory:
            params = self._mandatoryparams
        params.append((name, value))
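
    # Illustrative sketch (not part of the original source): typical use of
    # the mutation API above before generation starts; the part type and
    # parameter names here are hypothetical.
    #
    #   part = bundlepart('x-example:demo', data='some payload')
    #   part.addparam('key', 'value')                    # mandatory parameter
    #   part.addparam('note', 'extra', mandatory=False)  # advisory parameter
    #   # once getchunks() has started, further addparam() calls or data
    #   # assignment raise error.ReadOnlyPartError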

    # methods used to generate the bundle2 stream
    def getchunks(self, ui):
        if self._generated is not None:
            raise RuntimeError('part can only be consumed once')
        self._generated = False

        if ui.debugflag:
            msg = ['bundle2-output-part: "%s"' % self.type]
            if not self.mandatory:
                msg.append(' (advisory)')
            nbmp = len(self.mandatoryparams)
            nbap = len(self.advisoryparams)
            if nbmp or nbap:
                msg.append(' (params:')
                if nbmp:
                    msg.append(' %i mandatory' % nbmp)
                if nbap:
                    msg.append(' %i advisory' % nbap)
                msg.append(')')
            if not self.data:
                msg.append(' empty payload')
            elif util.safehasattr(self.data, 'next'):
                msg.append(' streamed payload')
            else:
                msg.append(' %i bytes payload' % len(self.data))
            msg.append('\n')
            ui.debug(''.join(msg))

        #### header
        if self.mandatory:
            parttype = self.type.upper()
        else:
            parttype = self.type.lower()
        outdebug(ui, 'part %s: "%s"' % (self.id, parttype))
        ## parttype
        header = [_pack(_fparttypesize, len(parttype)),
                  parttype, _pack(_fpartid, self.id),
                 ]
        ## parameters
        # count
        manpar = self.mandatoryparams
        advpar = self.advisoryparams
        header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
        # size
        parsizes = []
        for key, value in manpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        for key, value in advpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        paramsizes = _pack(_makefpartparamsizes(len(parsizes) / 2), *parsizes)
        header.append(paramsizes)
        # key, value
        for key, value in manpar:
            header.append(key)
            header.append(value)
        for key, value in advpar:
            header.append(key)
            header.append(value)
        ## finalize header
        headerchunk = ''.join(header)
        outdebug(ui, 'header chunk size: %i' % len(headerchunk))
        yield _pack(_fpartheadersize, len(headerchunk))
        yield headerchunk
        ## payload
        try:
            for chunk in self._payloadchunks():
                outdebug(ui, 'payload chunk size: %i' % len(chunk))
                yield _pack(_fpayloadsize, len(chunk))
                yield chunk
        except GeneratorExit:
            # GeneratorExit means that nobody is listening for our
            # results anyway, so just bail quickly rather than trying
            # to produce an error part.
            ui.debug('bundle2-generatorexit\n')
            raise
        except BaseException as exc:
            # backup exception data for later
            ui.debug('bundle2-input-stream-interrupt: encoding exception %s'
                     % exc)
            exc_info = sys.exc_info()
            msg = 'unexpected error: %s' % exc
            interpart = bundlepart('error:abort', [('message', msg)],
                                   mandatory=False)
            interpart.id = 0
            yield _pack(_fpayloadsize, -1)
            for chunk in interpart.getchunks(ui=ui):
                yield chunk
            outdebug(ui, 'closing payload chunk')
            # abort current part payload
            yield _pack(_fpayloadsize, 0)
            raise exc_info[0], exc_info[1], exc_info[2]
        # end of payload
        outdebug(ui, 'closing payload chunk')
        yield _pack(_fpayloadsize, 0)
        self._generated = True
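
    # Added commentary (not in the original source): the stream produced by
    # getchunks() is framed as
    #
    #   <32-bit header size><header><32-bit chunk size><chunk>...<32-bit 0>
    #
    # where a chunk size of 0 closes the part and the negative size -1 (see
    # ``flaginterrupt`` below) announces an embedded out-of-band part, used
    # above to transport an 'error:abort' part when generation fails midway.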

    def _payloadchunks(self):
        """yield chunks of the part payload

        Exists to handle the different methods to provide data to a part."""
        # we only support fixed size data now.
        # This will be improved in the future.
        if util.safehasattr(self.data, 'next'):
            buff = util.chunkbuffer(self.data)
            chunk = buff.read(preferedchunksize)
            while chunk:
                yield chunk
                chunk = buff.read(preferedchunksize)
        elif len(self.data):
            yield self.data
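
    # Added commentary: ``self.data`` is either a byte string, yielded as a
    # single chunk, or an iterator of byte strings (detected through its
    # ``next`` method) that is re-chunked to ``preferedchunksize`` via
    # util.chunkbuffer above.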


flaginterrupt = -1

class interrupthandler(unpackermixin):
    """read one part and process it with restricted capability

    This allows transmitting exceptions raised on the producer side during
    part iteration while the consumer is reading a part.

    Parts processed in this manner only have access to a ui object."""

    def __init__(self, ui, fp):
        super(interrupthandler, self).__init__(fp)
        self.ui = ui

    def _readpartheader(self):
        """reads a part header size and returns the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError('negative part header size: %i'
                                         % headersize)
        indebug(self.ui, 'part header size: %i\n' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def __call__(self):

        self.ui.debug('bundle2-input-stream-interrupt:'
                      ' opening out of band context\n')
        indebug(self.ui, 'bundle2 stream interruption, looking for a part.')
        headerblock = self._readpartheader()
        if headerblock is None:
            indebug(self.ui, 'no part found during interruption.')
            return
        part = unbundlepart(self.ui, headerblock, self._fp)
        op = interruptoperation(self.ui)
        _processpart(op, part)
        self.ui.debug('bundle2-input-stream-interrupt:'
                      ' closing out of band context\n')

class interruptoperation(object):
    """A limited operation to be used by part handlers during interruption

    It only has access to a ui object.
    """

    def __init__(self, ui):
        self.ui = ui
        self.reply = None
        self.captureoutput = False

    @property
    def repo(self):
        raise RuntimeError('no repo access from stream interruption')

    def gettransaction(self):
        raise TransactionUnavailable('no repo access from stream interruption')

class unbundlepart(unpackermixin):
    """a bundle part read from a bundle"""

    def __init__(self, ui, header, fp):
        super(unbundlepart, self).__init__(fp)
        self.ui = ui
        # unbundle state attr
        self._headerdata = header
        self._headeroffset = 0
        self._initialized = False
        self.consumed = False
        # part data
        self.id = None
        self.type = None
        self.mandatoryparams = None
        self.advisoryparams = None
        self.params = None
        self.mandatorykeys = ()
        self._payloadstream = None
        self._readheader()
        self._mandatory = None
        self._chunkindex = [] # (payload, file) position tuples for chunk starts
        self._pos = 0

    def _fromheader(self, size):
        """return the next <size> bytes from the header"""
        offset = self._headeroffset
        data = self._headerdata[offset:(offset + size)]
        self._headeroffset = offset + size
        return data

    def _unpackheader(self, format):
        """read given format from header

        This automatically computes the size of the format to read."""
        data = self._fromheader(struct.calcsize(format))
        return _unpack(format, data)

    def _initparams(self, mandatoryparams, advisoryparams):
        """internal function to set up all logic-related parameters"""
        # make it read only to prevent people touching it by mistake.
        self.mandatoryparams = tuple(mandatoryparams)
        self.advisoryparams = tuple(advisoryparams)
        # user friendly UI
        self.params = dict(self.mandatoryparams)
        self.params.update(dict(self.advisoryparams))
        self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)

    def _payloadchunks(self, chunknum=0):
        '''seek to specified chunk and start yielding data'''
        if len(self._chunkindex) == 0:
            assert chunknum == 0, 'Must start with chunk 0'
            self._chunkindex.append((0, super(unbundlepart, self).tell()))
        else:
            assert chunknum < len(self._chunkindex), \
                   'Unknown chunk %d' % chunknum
            super(unbundlepart, self).seek(self._chunkindex[chunknum][1])

        pos = self._chunkindex[chunknum][0]
        payloadsize = self._unpack(_fpayloadsize)[0]
        indebug(self.ui, 'payload chunk size: %i' % payloadsize)
        while payloadsize:
            if payloadsize == flaginterrupt:
                # interruption detection, the handler will now read a
                # single part and process it.
                interrupthandler(self.ui, self._fp)()
            elif payloadsize < 0:
                msg = 'negative payload chunk size: %i' % payloadsize
                raise error.BundleValueError(msg)
            else:
                result = self._readexact(payloadsize)
                chunknum += 1
                pos += payloadsize
                if chunknum == len(self._chunkindex):
                    self._chunkindex.append((pos,
                                             super(unbundlepart, self).tell()))
                yield result
            payloadsize = self._unpack(_fpayloadsize)[0]
            indebug(self.ui, 'payload chunk size: %i' % payloadsize)

    def _findchunk(self, pos):
        '''for a given payload position, return a chunk number and offset'''
        for chunk, (ppos, fpos) in enumerate(self._chunkindex):
            if ppos == pos:
                return chunk, 0
            elif ppos > pos:
                return chunk - 1, pos - self._chunkindex[chunk - 1][0]
        raise ValueError('Unknown chunk')
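
    # Worked example (added commentary): with a chunk index of
    # [(0, f0), (4096, f1), (8192, f2)], _findchunk(5000) returns (1, 904):
    # payload offset 5000 falls in chunk 1, 904 bytes past its start.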

    def _readheader(self):
        """read the header and set up the object"""
        typesize = self._unpackheader(_fparttypesize)[0]
        self.type = self._fromheader(typesize)
        indebug(self.ui, 'part type: "%s"' % self.type)
        self.id = self._unpackheader(_fpartid)[0]
        indebug(self.ui, 'part id: "%s"' % self.id)
        # extract mandatory bit from type
        self.mandatory = (self.type != self.type.lower())
        self.type = self.type.lower()
        ## reading parameters
        # param count
        mancount, advcount = self._unpackheader(_fpartparamcount)
        indebug(self.ui, 'part parameters: %i' % (mancount + advcount))
        # param size
        fparamsizes = _makefpartparamsizes(mancount + advcount)
        paramsizes = self._unpackheader(fparamsizes)
        # make it a list of pairs again
        paramsizes = zip(paramsizes[::2], paramsizes[1::2])
        # split mandatory from advisory
        mansizes = paramsizes[:mancount]
        advsizes = paramsizes[mancount:]
        # retrieve param value
        manparams = []
        for key, value in mansizes:
            manparams.append((self._fromheader(key), self._fromheader(value)))
        advparams = []
        for key, value in advsizes:
            advparams.append((self._fromheader(key), self._fromheader(value)))
        self._initparams(manparams, advparams)
        ## part payload
        self._payloadstream = util.chunkbuffer(self._payloadchunks())
        # we read the data, tell it
        self._initialized = True

    def read(self, size=None):
        """read payload data"""
        if not self._initialized:
            self._readheader()
        if size is None:
            data = self._payloadstream.read()
        else:
            data = self._payloadstream.read(size)
        self._pos += len(data)
        if size is None or len(data) < size:
            if not self.consumed and self._pos:
                self.ui.debug('bundle2-input-part: total payload size %i\n'
                              % self._pos)
            self.consumed = True
        return data

    def tell(self):
        return self._pos

    def seek(self, offset, whence=0):
        if whence == 0:
            newpos = offset
        elif whence == 1:
            newpos = self._pos + offset
        elif whence == 2:
            if not self.consumed:
                self.read()
            newpos = self._chunkindex[-1][0] - offset
        else:
            raise ValueError('Unknown whence value: %r' % (whence,))

        if newpos > self._chunkindex[-1][0] and not self.consumed:
            self.read()
        if not 0 <= newpos <= self._chunkindex[-1][0]:
            raise ValueError('Offset out of range')

        if self._pos != newpos:
            chunk, internaloffset = self._findchunk(newpos)
            self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
            adjust = self.read(internaloffset)
            if len(adjust) != internaloffset:
                raise error.Abort(_('Seek failed\n'))
            self._pos = newpos
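
    # Illustrative sketch (not part of the original source): unbundlepart acts
    # as a read-only, seekable file over the part payload, so a consumer can
    # rewind within already-read data. The variable names are hypothetical.
    #
    #   first = part.read(10)  # consume the first ten payload bytes
    #   part.seek(0)           # rewind via the recorded chunk index
    #   assert part.read(10) == first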

# These are only the static capabilities.
# Check the 'getrepocaps' function for the rest.
capabilities = {'HG20': (),
                'error': ('abort', 'unsupportedcontent', 'pushraced',
                          'pushkey'),
                'listkeys': (),
                'pushkey': (),
                'digests': tuple(sorted(util.DIGESTS.keys())),
                'remote-changegroup': ('http', 'https'),
                'hgtagsfnodes': (),
               }

def getrepocaps(repo, allowpushback=False):
    """return the bundle2 capabilities for a given repo

    Exists to allow extensions (like evolution) to mutate the capabilities.
    """
    caps = capabilities.copy()
    caps['changegroup'] = tuple(sorted(changegroup.supportedversions(repo)))
    if obsolete.isenabled(repo, obsolete.exchangeopt):
        supportedformat = tuple('V%i' % v for v in obsolete.formats)
        caps['obsmarkers'] = supportedformat
    if allowpushback:
        caps['pushback'] = ()
    return caps

def bundle2caps(remote):
    """return the bundle capabilities of a peer as dict"""
    raw = remote.capable('bundle2')
    if not raw and raw != '':
        return {}
    capsblob = urllib.unquote(remote.capable('bundle2'))
    return decodecaps(capsblob)

def obsmarkersversion(caps):
    """extract the list of supported obsmarkers versions from a bundle2caps dict
    """
    obscaps = caps.get('obsmarkers', ())
    return [int(c[1:]) for c in obscaps if c.startswith('V')]
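
# Added illustration: given a capabilities dict such as
# {'obsmarkers': ('V0', 'V1')}, obsmarkersversion returns [0, 1]; entries not
# starting with 'V' are ignored.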

@parthandler('changegroup', ('version', 'nbchanges', 'treemanifest'))
def handlechangegroup(op, inpart):
    """apply a changegroup part on the repo

    This is a very early implementation that will see massive rework before
    being inflicted on any end-user.
    """
    # Make sure we trigger a transaction creation
    #
    # The addchangegroup function will get a transaction object by itself, but
    # we need to make sure we trigger the creation of a transaction object used
    # for the whole processing scope.
    op.gettransaction()
    unpackerversion = inpart.params.get('version', '01')
    # We should raise an appropriate exception here
    cg = changegroup.getunbundler(unpackerversion, inpart, None)
    # the source and url passed here are overwritten by the one contained in
    # the transaction.hookargs argument. So 'bundle2' is a placeholder
    nbchangesets = None
    if 'nbchanges' in inpart.params:
        nbchangesets = int(inpart.params.get('nbchanges'))
    if ('treemanifest' in inpart.params and
        'treemanifest' not in op.repo.requirements):
        if len(op.repo.changelog) != 0:
            raise error.Abort(_(
                "bundle contains tree manifests, but local repo is "
                "non-empty and does not use tree manifests"))
        op.repo.requirements.add('treemanifest')
        op.repo._applyopenerreqs()
        op.repo._writerequirements()
    ret = cg.apply(op.repo, 'bundle2', 'bundle2', expectedtotal=nbchangesets)
    op.records.add('changegroup', {'return': ret})
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart('reply:changegroup', mandatory=False)
        part.addparam('in-reply-to', str(inpart.id), mandatory=False)
        part.addparam('return', '%i' % ret, mandatory=False)
    assert not inpart.read()

_remotechangegroupparams = tuple(['url', 'size', 'digests'] +
    ['digest:%s' % k for k in util.DIGESTS.keys()])
@parthandler('remote-changegroup', _remotechangegroupparams)
def handleremotechangegroup(op, inpart):
    """apply a bundle10 on the repo, given an url and validation information

    All the information about the remote bundle to import is given as
    parameters. The parameters include:
      - url: the url to the bundle10.
      - size: the bundle10 file size. It is used to validate that what was
        retrieved by the client matches the server knowledge about the bundle.
      - digests: a space separated list of the digest types provided as
        parameters.
      - digest:<digest-type>: the hexadecimal representation of the digest with
        that name. Like the size, it is used to validate that what was
        retrieved by the client matches what the server knows about the bundle.

    When multiple digest types are given, all of them are checked.
    """
    try:
        raw_url = inpart.params['url']
    except KeyError:
        raise error.Abort(_('remote-changegroup: missing "%s" param') % 'url')
    parsed_url = util.url(raw_url)
    if parsed_url.scheme not in capabilities['remote-changegroup']:
        raise error.Abort(_('remote-changegroup does not support %s urls') %
                          parsed_url.scheme)

    try:
        size = int(inpart.params['size'])
    except ValueError:
        raise error.Abort(_('remote-changegroup: invalid value for param "%s"')
                          % 'size')
    except KeyError:
        raise error.Abort(_('remote-changegroup: missing "%s" param') % 'size')

    digests = {}
    for typ in inpart.params.get('digests', '').split():
        param = 'digest:%s' % typ
        try:
            value = inpart.params[param]
        except KeyError:
            raise error.Abort(_('remote-changegroup: missing "%s" param') %
                              param)
        digests[typ] = value

    real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)

    # Make sure we trigger a transaction creation
    #
    # The addchangegroup function will get a transaction object by itself, but
    # we need to make sure we trigger the creation of a transaction object used
    # for the whole processing scope.
    op.gettransaction()
    from . import exchange
    cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
    if not isinstance(cg, changegroup.cg1unpacker):
        raise error.Abort(_('%s: not a bundle version 1.0') %
                          util.hidepassword(raw_url))
    ret = cg.apply(op.repo, 'bundle2', 'bundle2')
    op.records.add('changegroup', {'return': ret})
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart('reply:changegroup')
        part.addparam('in-reply-to', str(inpart.id), mandatory=False)
        part.addparam('return', '%i' % ret, mandatory=False)
    try:
        real_part.validate()
    except error.Abort as e:
        raise error.Abort(_('bundle at %s is corrupted:\n%s') %
                          (util.hidepassword(raw_url), str(e)))
    assert not inpart.read()
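
# Added illustration: a remote-changegroup part would carry parameters along
# the lines of (values hypothetical)
#
#   url=https://example.com/bundle.hg
#   size=123456
#   digests=sha1
#   digest:sha1=<40-character hex digest>
#
# and the digestchecker wrapper above verifies both the size and every listed
# digest while the bundle streams through cg.apply().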

@parthandler('reply:changegroup', ('return', 'in-reply-to'))
def handlereplychangegroup(op, inpart):
    ret = int(inpart.params['return'])
    replyto = int(inpart.params['in-reply-to'])
    op.records.add('changegroup', {'return': ret}, replyto)

@parthandler('check:heads')
def handlecheckheads(op, inpart):
    """check that the heads of the repo did not change

    This is used to detect a push race when using unbundle.
    This replaces the "heads" argument of unbundle."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    # Trigger a transaction so that we are guaranteed to have the lock now.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()
    if heads != op.repo.heads():
        raise error.PushRaced('repository changed while pushing - '
                              'please try again')
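
# Added commentary: the 'check:heads' payload is simply the expected head
# nodes as raw 20-byte binary hashes concatenated with no separator; the
# read(20) loop above stops at the first short read.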

@parthandler('output')
def handleoutput(op, inpart):
    """forward output captured on the server to the client"""
    for line in inpart.read().splitlines():
        op.ui.status(('remote: %s\n' % line))

@parthandler('replycaps')
def handlereplycaps(op, inpart):
    """Notify that a reply bundle should be created

    The payload contains the capabilities information for the reply"""
    caps = decodecaps(inpart.read())
    if op.reply is None:
        op.reply = bundle20(op.ui, caps)

class AbortFromPart(error.Abort):
    """Sub-class of Abort that denotes an error from a bundle2 part."""

@parthandler('error:abort', ('message', 'hint'))
def handleerrorabort(op, inpart):
    """Used to transmit abort error over the wire"""
    raise AbortFromPart(inpart.params['message'],
                        hint=inpart.params.get('hint'))

@parthandler('error:pushkey', ('namespace', 'key', 'new', 'old', 'ret',
                               'in-reply-to'))
def handleerrorpushkey(op, inpart):
    """Used to transmit failure of a mandatory pushkey over the wire"""
    kwargs = {}
    for name in ('namespace', 'key', 'new', 'old', 'ret'):
        value = inpart.params.get(name)
        if value is not None:
            kwargs[name] = value
    raise error.PushkeyFailed(inpart.params['in-reply-to'], **kwargs)

@parthandler('error:unsupportedcontent', ('parttype', 'params'))
def handleerrorunsupportedcontent(op, inpart):
    """Used to transmit unknown content error over the wire"""
    kwargs = {}
    parttype = inpart.params.get('parttype')
    if parttype is not None:
        kwargs['parttype'] = parttype
    params = inpart.params.get('params')
    if params is not None:
        kwargs['params'] = params.split('\0')

    raise error.BundleUnknownFeatureError(**kwargs)

@parthandler('error:pushraced', ('message',))
def handleerrorpushraced(op, inpart):
    """Used to transmit push race error over the wire"""
    raise error.ResponseError(_('push failed:'), inpart.params['message'])

@parthandler('listkeys', ('namespace',))
def handlelistkeys(op, inpart):
    """retrieve pushkey namespace content stored in a bundle2"""
    namespace = inpart.params['namespace']
    r = pushkey.decodekeys(inpart.read())
    op.records.add('listkeys', (namespace, r))

@parthandler('pushkey', ('namespace', 'key', 'old', 'new'))
def handlepushkey(op, inpart):
    """process a pushkey request"""
    dec = pushkey.decode
    namespace = dec(inpart.params['namespace'])
    key = dec(inpart.params['key'])
    old = dec(inpart.params['old'])
    new = dec(inpart.params['new'])
    # Grab the transaction to ensure that we have the lock before performing
    # the pushkey.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()
    ret = op.repo.pushkey(namespace, key, old, new)
    record = {'namespace': namespace,
              'key': key,
              'old': old,
              'new': new}
    op.records.add('pushkey', record)
    if op.reply is not None:
        rpart = op.reply.newpart('reply:pushkey')
        rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
        rpart.addparam('return', '%i' % ret, mandatory=False)
    if inpart.mandatory and not ret:
        kwargs = {}
        for key in ('namespace', 'key', 'new', 'old', 'ret'):
            if key in inpart.params:
                kwargs[key] = inpart.params[key]
        raise error.PushkeyFailed(partid=str(inpart.id), **kwargs)

@parthandler('reply:pushkey', ('return', 'in-reply-to'))
def handlepushkeyreply(op, inpart):
    """retrieve the result of a pushkey request"""
    ret = int(inpart.params['return'])
    partid = int(inpart.params['in-reply-to'])
    op.records.add('pushkey', {'return': ret}, partid)

@parthandler('obsmarkers')
def handleobsmarker(op, inpart):
    """add a stream of obsmarkers to the repo"""
    tr = op.gettransaction()
    markerdata = inpart.read()
    if op.ui.config('experimental', 'obsmarkers-exchange-debug', False):
        op.ui.write(('obsmarker-exchange: %i bytes received\n')
                    % len(markerdata))
    # The mergemarkers call will crash if marker creation is not enabled.
    # We want to avoid this if the part is advisory.
    if not inpart.mandatory and op.repo.obsstore.readonly:
        op.repo.ui.debug('ignoring obsolescence markers, feature not enabled')
        return
    new = op.repo.obsstore.mergemarkers(tr, markerdata)
    if new:
        op.repo.ui.status(_('%i new obsolescence markers\n') % new)
    op.records.add('obsmarkers', {'new': new})
    if op.reply is not None:
        rpart = op.reply.newpart('reply:obsmarkers')
        rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
        rpart.addparam('new', '%i' % new, mandatory=False)


@parthandler('reply:obsmarkers', ('new', 'in-reply-to'))
def handleobsmarkerreply(op, inpart):
    """retrieve the result of an obsmarkers part"""
    ret = int(inpart.params['new'])
    partid = int(inpart.params['in-reply-to'])
    op.records.add('obsmarkers', {'new': ret}, partid)

@parthandler('hgtagsfnodes')
def handlehgtagsfnodes(op, inpart):
    """Applies .hgtags fnodes cache entries to the local repo.

    Payload is pairs of 20 byte changeset nodes and filenodes.
    """
    # Grab the transaction so we ensure that we have the lock at this point.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()
    cache = tags.hgtagsfnodescache(op.repo.unfiltered())

    count = 0
    while True:
        node = inpart.read(20)
        fnode = inpart.read(20)
        if len(node) < 20 or len(fnode) < 20:
            op.ui.debug('ignoring incomplete received .hgtags fnodes data\n')
            break
        cache.setfnode(node, fnode)
        count += 1

    cache.write()
    op.ui.debug('applied %i hgtags fnodes cache entries\n' % count)

# scmutil.py - Mercurial core utility functions
#
# Copyright Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

import errno
import glob
import os
import re
import shutil
import stat
import tempfile

from .i18n import _
from .node import wdirrev
from . import (
    encoding,
    error,
    match as matchmod,
    osutil,
    pathutil,
    phases,
    revset,
    similar,
    util,
)

if os.name == 'nt':
    from . import scmwindows as scmplatform
else:
    from . import scmposix as scmplatform

systemrcpath = scmplatform.systemrcpath
userrcpath = scmplatform.userrcpath

class status(tuple):
    '''Named tuple with a list of files per status. The 'deleted', 'unknown'
    and 'ignored' properties are only relevant to the working copy.
    '''

    __slots__ = ()

    def __new__(cls, modified, added, removed, deleted, unknown, ignored,
                clean):
        return tuple.__new__(cls, (modified, added, removed, deleted, unknown,
                                   ignored, clean))

    @property
    def modified(self):
        '''files that have been modified'''
        return self[0]

    @property
    def added(self):
        '''files that have been added'''
        return self[1]

    @property
    def removed(self):
        '''files that have been removed'''
        return self[2]

    @property
    def deleted(self):
        '''files that are in the dirstate, but have been deleted from the
        working copy (aka "missing")
        '''
        return self[3]

    @property
    def unknown(self):
        '''files not in the dirstate that are not ignored'''
        return self[4]

    @property
    def ignored(self):
        '''files not in the dirstate that are ignored (by _dirignore())'''
        return self[5]

    @property
    def clean(self):
        '''files that have not been modified'''
        return self[6]

    def __repr__(self, *args, **kwargs):
        return (('<status modified=%r, added=%r, removed=%r, deleted=%r, '
                 'unknown=%r, ignored=%r, clean=%r>') % self)

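# Illustrative sketch (not part of the original source): status behaves both
# as a plain 7-tuple and as a named accessor; the file lists below are
# hypothetical.
#
#   st = status(['a'], [], [], [], [], [], ['b'])
#   st.modified   # ['a'], same as st[0]
#   st.clean      # ['b'], same as st[6]
#   modified, added, removed, deleted, unknown, ignored, clean = st
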
def itersubrepos(ctx1, ctx2):
    """find subrepos in ctx1 or ctx2"""
    # Create a (subpath, ctx) mapping where we prefer subpaths from
    # ctx1. The subpaths from ctx2 are important when the .hgsub file
    # has been modified (in ctx2) but not yet committed (in ctx1).
    subpaths = dict.fromkeys(ctx2.substate, ctx2)
    subpaths.update(dict.fromkeys(ctx1.substate, ctx1))

    missing = set()

    for subpath in ctx2.substate:
        if subpath not in ctx1.substate:
            del subpaths[subpath]
            missing.add(subpath)

    for subpath, ctx in sorted(subpaths.iteritems()):
        yield subpath, ctx.sub(subpath)

    # Yield an empty subrepo based on ctx1 for anything only in ctx2. That way,
    # status and diff will have an accurate result when it does
    # 'sub.{status|diff}(rev2)'. Otherwise, the ctx2 subrepo is compared
    # against itself.
    for subpath in missing:
        yield subpath, ctx2.nullsub(subpath, ctx1)

def nochangesfound(ui, repo, excluded=None):
    '''Report no changes for push/pull, excluded is None or a list of
    nodes excluded from the push/pull.
    '''
    secretlist = []
    if excluded:
        for n in excluded:
            if n not in repo:
                # discovery should not have included the filtered revision,
                # we have to explicitly exclude it until discovery is cleaned
                # up.
                continue
            ctx = repo[n]
            if ctx.phase() >= phases.secret and not ctx.extinct():
                secretlist.append(n)

    if secretlist:
        ui.status(_("no changes found (ignored %d secret changesets)\n")
                  % len(secretlist))
    else:
        ui.status(_("no changes found\n"))

139 def checknewlabel(repo, lbl, kind):
139 def checknewlabel(repo, lbl, kind):
140 # Do not use the "kind" parameter in ui output.
140 # Do not use the "kind" parameter in ui output.
141 # It makes strings difficult to translate.
141 # It makes strings difficult to translate.
142 if lbl in ['tip', '.', 'null']:
142 if lbl in ['tip', '.', 'null']:
143 raise error.Abort(_("the name '%s' is reserved") % lbl)
143 raise error.Abort(_("the name '%s' is reserved") % lbl)
144 for c in (':', '\0', '\n', '\r'):
144 for c in (':', '\0', '\n', '\r'):
145 if c in lbl:
145 if c in lbl:
146 raise error.Abort(_("%r cannot be used in a name") % c)
146 raise error.Abort(_("%r cannot be used in a name") % c)
147 try:
147 try:
148 int(lbl)
148 int(lbl)
149 raise error.Abort(_("cannot use an integer as a name"))
149 raise error.Abort(_("cannot use an integer as a name"))
150 except ValueError:
150 except ValueError:
151 pass
151 pass
152
152
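# Hedged sketch of the "reject purely numeric names" idiom from checknewlabel
# above: if int() succeeds, the label parses as an integer, and the exception
# raised immediately afterwards is not a ValueError, so it escapes the
# handler. The validate helper and RuntimeError below are stand-ins for the
# real function and error.Abort.
def validate(lbl):
    if lbl in ['tip', '.', 'null']:
        raise RuntimeError("the name '%s' is reserved" % lbl)
    try:
        int(lbl)
        raise RuntimeError("cannot use an integer as a name")
    except ValueError:
        pass  # non-numeric: acceptable

validate('feature-x')              # passes silently
try:
    validate('123')
except RuntimeError as e:
    print(e)                       # cannot use an integer as a name
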
def checkfilename(f):
    '''Check that the filename f is an acceptable filename for a tracked file'''
    if '\r' in f or '\n' in f:
        raise error.Abort(_("'\\n' and '\\r' disallowed in filenames: %r") % f)

def checkportable(ui, f):
    '''Check if filename f is portable and warn or abort depending on config'''
    checkfilename(f)
    abort, warn = checkportabilityalert(ui)
    if abort or warn:
        msg = util.checkwinfilename(f)
        if msg:
            msg = "%s: %r" % (msg, f)
            if abort:
                raise error.Abort(msg)
            ui.warn(_("warning: %s\n") % msg)

def checkportabilityalert(ui):
    '''check if the user's config requests nothing, a warning, or abort for
    non-portable filenames'''
    val = ui.config('ui', 'portablefilenames', 'warn')
    lval = val.lower()
    bval = util.parsebool(val)
    abort = os.name == 'nt' or lval == 'abort'
    warn = bval or lval == 'warn'
    if bval is None and not (warn or abort or lval == 'ignore'):
        raise error.ConfigError(
            _("ui.portablefilenames value is invalid ('%s')") % val)
    return abort, warn

class casecollisionauditor(object):
    def __init__(self, ui, abort, dirstate):
        self._ui = ui
        self._abort = abort
        allfiles = '\0'.join(dirstate._map)
        self._loweredfiles = set(encoding.lower(allfiles).split('\0'))
        self._dirstate = dirstate
        # The purpose of _newfiles is so that we don't complain about
        # case collisions if someone were to call this object with the
        # same filename twice.
        self._newfiles = set()

    def __call__(self, f):
        if f in self._newfiles:
            return
        fl = encoding.lower(f)
        if fl in self._loweredfiles and f not in self._dirstate:
            msg = _('possible case-folding collision for %s') % f
            if self._abort:
                raise error.Abort(msg)
            self._ui.warn(_("warning: %s\n") % msg)
        self._loweredfiles.add(fl)
        self._newfiles.add(f)

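# Self-contained sketch of the case-folding collision check above: keep a set
# of lowercased names and flag any new name whose lowercased form is already
# present. Plain str.lower() stands in for Mercurial's encoding.lower(),
# which is encoding-aware.
lowered = set()

def check(f):
    fl = f.lower()
    collision = fl in lowered
    lowered.add(fl)
    return collision

assert not check('README')
assert check('readme')     # would collide with README on a case-folding FS
assert not check('notes')
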
def filteredhash(repo, maxrev):
    """build hash of filtered revisions in the current repoview.

    Multiple caches perform up-to-date validation by checking that the
    tiprev and tipnode stored in the cache file match the current repository.
    However, this is not sufficient for validating repoviews because the set
    of revisions in the view may change without the repository tiprev and
    tipnode changing.

    This function hashes all the revs filtered from the view and returns
    that SHA-1 digest.
    """
    cl = repo.changelog
    if not cl.filteredrevs:
        return None
    key = None
    revs = sorted(r for r in cl.filteredrevs if r <= maxrev)
    if revs:
        s = util.sha1()
        for rev in revs:
            s.update('%s;' % rev)
        key = s.digest()
    return key

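# Hedged sketch of the cache-key computation above, with hashlib.sha1 in
# place of util.sha1 and a literal set in place of changelog.filteredrevs.
# The ';' separator and the sorted order are what make the digest stable for
# a given set of filtered revisions.
import hashlib

def filteredkey(filteredrevs, maxrev):
    revs = sorted(r for r in filteredrevs if r <= maxrev)
    if not revs:
        return None
    s = hashlib.sha1()
    for rev in revs:
        s.update(('%s;' % rev).encode('ascii'))
    return s.digest()

key1 = filteredkey({3, 7, 12}, maxrev=10)   # hashes "3;7;"
key2 = filteredkey({7, 3, 99}, maxrev=10)   # same revs <= 10, same key
assert key1 == key2
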
class abstractvfs(object):
    """Abstract base class; cannot be instantiated"""

    def __init__(self, *args, **kwargs):
        '''Prevent instantiation; don't call this from subclasses.'''
        raise NotImplementedError('attempted instantiating ' + str(type(self)))

    def tryread(self, path):
        '''gracefully return an empty string for missing files'''
        try:
            return self.read(path)
        except IOError as inst:
            if inst.errno != errno.ENOENT:
                raise
            return ""

    def tryreadlines(self, path, mode='rb'):
        '''gracefully return an empty array for missing files'''
        try:
            return self.readlines(path, mode=mode)
        except IOError as inst:
            if inst.errno != errno.ENOENT:
                raise
            return []

    def open(self, path, mode="r", text=False, atomictemp=False,
             notindexed=False):
        '''Open ``path`` file, which is relative to vfs root.

        Newly created directories are marked as "not to be indexed by
        the content indexing service", if ``notindexed`` is specified
        for "write" mode access.
        '''
        self.open = self.__call__
        return self.__call__(path, mode, text, atomictemp, notindexed)

    def read(self, path):
        with self(path, 'rb') as fp:
            return fp.read()

    def readlines(self, path, mode='rb'):
        with self(path, mode=mode) as fp:
            return fp.readlines()

    def write(self, path, data):
        with self(path, 'wb') as fp:
            return fp.write(data)

    def writelines(self, path, data, mode='wb', notindexed=False):
        with self(path, mode=mode, notindexed=notindexed) as fp:
            return fp.writelines(data)

    def append(self, path, data):
        with self(path, 'ab') as fp:
            return fp.write(data)

    def basename(self, path):
        """return base element of a path (as os.path.basename would do)

        This exists to allow handling of strange encoding if needed."""
        return os.path.basename(path)

    def chmod(self, path, mode):
        return os.chmod(self.join(path), mode)

    def dirname(self, path):
        """return dirname element of a path (as os.path.dirname would do)

        This exists to allow handling of strange encoding if needed."""
        return os.path.dirname(path)

    def exists(self, path=None):
        return os.path.exists(self.join(path))

    def fstat(self, fp):
        return util.fstat(fp)

    def isdir(self, path=None):
        return os.path.isdir(self.join(path))

    def isfile(self, path=None):
        return os.path.isfile(self.join(path))

    def islink(self, path=None):
        return os.path.islink(self.join(path))

    def isfileorlink(self, path=None):
        '''return whether path is a regular file or a symlink

        Unlike isfile, this doesn't follow symlinks.'''
        try:
            st = self.lstat(path)
        except OSError:
            return False
        mode = st.st_mode
        return stat.S_ISREG(mode) or stat.S_ISLNK(mode)

    def reljoin(self, *paths):
        """join various elements of a path together (as os.path.join would do)

        The vfs base is not injected so that paths stay relative. This exists
        to allow handling of strange encoding if needed."""
        return os.path.join(*paths)

    def split(self, path):
        """split top-most element of a path (as os.path.split would do)

        This exists to allow handling of strange encoding if needed."""
        return os.path.split(path)

    def lexists(self, path=None):
        return os.path.lexists(self.join(path))

    def lstat(self, path=None):
        return os.lstat(self.join(path))

    def listdir(self, path=None):
        return os.listdir(self.join(path))

    def makedir(self, path=None, notindexed=True):
        return util.makedir(self.join(path), notindexed)

    def makedirs(self, path=None, mode=None):
        return util.makedirs(self.join(path), mode)

    def makelock(self, info, path):
        return util.makelock(info, self.join(path))

    def mkdir(self, path=None):
        return os.mkdir(self.join(path))

    def mkstemp(self, suffix='', prefix='tmp', dir=None, text=False):
        fd, name = tempfile.mkstemp(suffix=suffix, prefix=prefix,
                                    dir=self.join(dir), text=text)
        dname, fname = util.split(name)
        if dir:
            return fd, os.path.join(dir, fname)
        else:
            return fd, fname

    def readdir(self, path=None, stat=None, skip=None):
        return osutil.listdir(self.join(path), stat, skip)

    def readlock(self, path):
        return util.readlock(self.join(path))

    def rename(self, src, dst):
        return util.rename(self.join(src), self.join(dst))

    def readlink(self, path):
        return os.readlink(self.join(path))

    def removedirs(self, path=None):
        """Remove a leaf directory and all empty intermediate ones
        """
        return util.removedirs(self.join(path))

    def rmtree(self, path=None, ignore_errors=False, forcibly=False):
        """Remove a directory tree recursively

        If ``forcibly``, this tries to remove READ-ONLY files, too.
        """
        if forcibly:
            def onerror(function, path, excinfo):
                if function is not os.remove:
                    raise
                # read-only files cannot be unlinked under Windows
                s = os.stat(path)
                if (s.st_mode & stat.S_IWRITE) != 0:
                    raise
                os.chmod(path, stat.S_IMODE(s.st_mode) | stat.S_IWRITE)
                os.remove(path)
        else:
            onerror = None
        return shutil.rmtree(self.join(path),
                             ignore_errors=ignore_errors, onerror=onerror)

    def setflags(self, path, l, x):
        return util.setflags(self.join(path), l, x)

    def stat(self, path=None):
        return os.stat(self.join(path))

    def unlink(self, path=None):
        return util.unlink(self.join(path))

    def unlinkpath(self, path=None, ignoremissing=False):
        return util.unlinkpath(self.join(path), ignoremissing)

    def utime(self, path=None, t=None):
        return os.utime(self.join(path), t)

    def walk(self, path=None, onerror=None):
        """Yield a (dirpath, dirs, files) tuple for each directory under path

        ``dirpath`` is relative to the root of this vfs. This uses
        ``os.sep`` as the path separator, even if you specify a POSIX
        style ``path``.

        "The root of this vfs" is represented as empty ``dirpath``.
        """
        root = os.path.normpath(self.join(None))
        # when dirpath == root, dirpath[prefixlen:] becomes empty
        # because len(dirpath) < prefixlen.
        prefixlen = len(pathutil.normasprefix(root))
        for dirpath, dirs, files in os.walk(self.join(path), onerror=onerror):
            yield (dirpath[prefixlen:], dirs, files)

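# Two self-contained sketches of idioms from abstractvfs above; the names
# tryread and the join-based prefix are illustrative stand-ins, not
# Mercurial APIs.
import errno
import os
import tempfile

# 1. tryread: a missing file (ENOENT) reads as empty, but any other I/O
#    error still propagates, so permission problems are never hidden.
def tryread(path):
    try:
        with open(path, 'rb') as fp:
            return fp.read()
    except IOError as inst:
        if inst.errno != errno.ENOENT:
            raise
        return b""

assert tryread(os.path.join(tempfile.gettempdir(), 'no-such-file')) == b""

# 2. walk: strip the normalized root plus its trailing separator from every
#    dirpath os.walk yields, so the root itself comes back as the empty
#    string. os.path.join(root, '') plays the role of pathutil.normasprefix.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'a', 'b'))
prefixlen = len(os.path.join(os.path.normpath(root), ''))
rel = [dirpath[prefixlen:] for dirpath, _dirs, _files in os.walk(root)]
assert rel[0] == ''                       # the vfs root itself
assert os.path.join('a', 'b') in rel      # nested dirs are root-relative
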
class vfs(abstractvfs):
    '''Operate files relative to a base directory

    This class is used to hide the details of COW semantics and
    remote file access from higher level code.
    '''
    def __init__(self, base, audit=True, expandpath=False, realpath=False):
        if expandpath:
            base = util.expandpath(base)
        if realpath:
            base = os.path.realpath(base)
        self.base = base
        self.mustaudit = audit
        self.createmode = None
        self._trustnlink = None

    @property
    def mustaudit(self):
        return self._audit

    @mustaudit.setter
    def mustaudit(self, onoff):
        self._audit = onoff
        if onoff:
            self.audit = pathutil.pathauditor(self.base)
        else:
            self.audit = util.always

    @util.propertycache
    def _cansymlink(self):
        return util.checklink(self.base)

    @util.propertycache
    def _chmod(self):
        return util.checkexec(self.base)

    def _fixfilemode(self, name):
        if self.createmode is None or not self._chmod:
            return
        os.chmod(name, self.createmode & 0o666)

    def __call__(self, path, mode="r", text=False, atomictemp=False,
                 notindexed=False):
        '''Open ``path`` file, which is relative to vfs root.

        Newly created directories are marked as "not to be indexed by
        the content indexing service", if ``notindexed`` is specified
        for "write" mode access.
        '''
        if self._audit:
            r = util.checkosfilename(path)
            if r:
                raise error.Abort("%s: %r" % (r, path))
        self.audit(path)
        f = self.join(path)

        if not text and "b" not in mode:
            mode += "b" # for that other OS

        nlink = -1
        if mode not in ('r', 'rb'):
            dirname, basename = util.split(f)
            # If basename is empty, then the path is malformed because it points
            # to a directory. Let the posixfile() call below raise IOError.
            if basename:
                if atomictemp:
                    util.ensuredirs(dirname, self.createmode, notindexed)
                    return util.atomictempfile(f, mode, self.createmode)
                try:
                    if 'w' in mode:
                        util.unlink(f)
                        nlink = 0
                    else:
                        # nlinks() may behave differently for files on Windows
                        # shares if the file is open.
                        with util.posixfile(f):
                            nlink = util.nlinks(f)
                            if nlink < 1:
                                nlink = 2 # force mktempcopy (issue1922)
                except (OSError, IOError) as e:
                    if e.errno != errno.ENOENT:
                        raise
                    nlink = 0
                util.ensuredirs(dirname, self.createmode, notindexed)
                if nlink > 0:
                    if self._trustnlink is None:
                        self._trustnlink = nlink > 1 or util.checknlink(f)
                    if nlink > 1 or not self._trustnlink:
                        util.rename(util.mktempcopy(f), f)
        fp = util.posixfile(f, mode)
        if nlink == 0:
            self._fixfilemode(f)
        return fp

    def symlink(self, src, dst):
        self.audit(dst)
        linkname = self.join(dst)
        try:
            os.unlink(linkname)
        except OSError:
            pass

        util.ensuredirs(os.path.dirname(linkname), self.createmode)

        if self._cansymlink:
            try:
                os.symlink(src, linkname)
            except OSError as err:
                raise OSError(err.errno, _('could not symlink to %r: %s') %
                              (src, err.strerror), linkname)
        else:
            self.write(dst, src)

    def join(self, path, *insidef):
        if path:
            return os.path.join(self.base, path, *insidef)
        else:
            return self.base

opener = vfs

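# A minimal standalone sketch of the @property/@foo.setter spelling that
# vfs.mustaudit above now uses (and auditvfs below delegates through), shown
# against the older property(getter, setter) form this changeset replaces.
# The class and attribute names are illustrative only. Both spellings behave
# identically; the decorator form keeps both accessors under one public name
# and drops the leftover _get/_set helpers from the class namespace.
class modernstyle(object):
    @property
    def flag(self):
        return self._flag

    @flag.setter
    def flag(self, onoff):
        # setters can normalize or fan out, much as mustaudit swaps
        # the audit callable when toggled
        self._flag = bool(onoff)

class oldstyle(object):
    def _getflag(self):
        return self._flag
    def _setflag(self, onoff):
        self._flag = bool(onoff)
    flag = property(_getflag, _setflag)

m = modernstyle()
m.flag = 1
o = oldstyle()
o.flag = 1
assert m.flag is True and o.flag is True   # both setters normalized the value
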
class auditvfs(object):
    def __init__(self, vfs):
        self.vfs = vfs

    @property
    def mustaudit(self):
        return self.vfs.mustaudit

    @mustaudit.setter
    def mustaudit(self, onoff):
        self.vfs.mustaudit = onoff

class filtervfs(abstractvfs, auditvfs):
    '''Wrapper vfs for filtering filenames with a function.'''

    def __init__(self, vfs, filter):
        auditvfs.__init__(self, vfs)
        self._filter = filter

    def __call__(self, path, *args, **kwargs):
        return self.vfs(self._filter(path), *args, **kwargs)

    def join(self, path, *insidef):
        if path:
            return self.vfs.join(self._filter(self.vfs.reljoin(path, *insidef)))
        else:
            return self.vfs.join(path)

filteropener = filtervfs

class readonlyvfs(abstractvfs, auditvfs):
    '''Wrapper vfs preventing any writing.'''

    def __init__(self, vfs):
        auditvfs.__init__(self, vfs)

    def __call__(self, path, mode='r', *args, **kw):
        if mode not in ('r', 'rb'):
            raise error.Abort('this vfs is read only')
        return self.vfs(path, mode, *args, **kw)

    def join(self, path, *insidef):
        return self.vfs.join(path, *insidef)

def walkrepos(path, followsym=False, seen_dirs=None, recurse=False):
    '''yield every hg repository under path, always recursively.
    The recurse flag only controls recursion into repo working dirs'''
    def errhandler(err):
        if err.filename == path:
            raise err
    samestat = getattr(os.path, 'samestat', None)
    if followsym and samestat is not None:
        def adddir(dirlst, dirname):
            match = False
            dirstat = os.stat(dirname)
            for lstdirstat in dirlst:
                if samestat(dirstat, lstdirstat):
                    match = True
                    break
            if not match:
                dirlst.append(dirstat)
            return not match
    else:
        followsym = False

    if (seen_dirs is None) and followsym:
        seen_dirs = []
        adddir(seen_dirs, path)
    for root, dirs, files in os.walk(path, topdown=True, onerror=errhandler):
        dirs.sort()
        if '.hg' in dirs:
            yield root # found a repository
            qroot = os.path.join(root, '.hg', 'patches')
            if os.path.isdir(os.path.join(qroot, '.hg')):
                yield qroot # we have a patch queue repo here
            if recurse:
                # avoid recursing inside the .hg directory
                dirs.remove('.hg')
            else:
                dirs[:] = [] # don't descend further
        elif followsym:
            newdirs = []
            for d in dirs:
                fname = os.path.join(root, d)
                if adddir(seen_dirs, fname):
                    if os.path.islink(fname):
                        for hgname in walkrepos(fname, True, seen_dirs):
                            yield hgname
                    else:
                        newdirs.append(d)
            dirs[:] = newdirs

def osrcpath():
    '''return default os-specific hgrc search path'''
    path = []
    defaultpath = os.path.join(util.datapath, 'default.d')
    if os.path.isdir(defaultpath):
        for f, kind in osutil.listdir(defaultpath):
            if f.endswith('.rc'):
                path.append(os.path.join(defaultpath, f))
    path.extend(systemrcpath())
    path.extend(userrcpath())
    path = [os.path.normpath(f) for f in path]
    return path

_rcpath = None

def rcpath():
    '''return hgrc search path. if env var HGRCPATH is set, use it.
    for each item in path, if directory, use files ending in .rc,
    else use item.
    make HGRCPATH empty to only look in .hg/hgrc of current repo.
    if no HGRCPATH, use default os-specific path.'''
    global _rcpath
    if _rcpath is None:
        if 'HGRCPATH' in os.environ:
            _rcpath = []
            for p in os.environ['HGRCPATH'].split(os.pathsep):
                if not p:
                    continue
                p = util.expandpath(p)
                if os.path.isdir(p):
                    for f, kind in osutil.listdir(p):
                        if f.endswith('.rc'):
                            _rcpath.append(os.path.join(p, f))
                else:
                    _rcpath.append(p)
        else:
            _rcpath = osrcpath()
    return _rcpath

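# Hedged sketch of the HGRCPATH expansion above: split on os.pathsep, skip
# empty entries, and expand directory entries to their *.rc files. A plain
# sorted os.listdir stands in for Mercurial's osutil.listdir, which also
# returns each entry's kind.
import os

def expandrcpath(value):
    paths = []
    for p in value.split(os.pathsep):
        if not p:
            continue
        if os.path.isdir(p):
            paths.extend(os.path.join(p, f) for f in sorted(os.listdir(p))
                         if f.endswith('.rc'))
        else:
            paths.append(p)
    return paths

# An entirely empty HGRCPATH yields no config files at all, which is why
# setting it to "" restricts lookup to the repository's own .hg/hgrc.
assert expandrcpath('') == []
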
def intrev(rev):
    """Return integer for a given revision that can be used in comparison or
    arithmetic operation"""
    if rev is None:
        return wdirrev
    return rev

def revsingle(repo, revspec, default='.'):
    if not revspec and revspec != 0:
        return repo[default]

    l = revrange(repo, [revspec])
    if not l:
        raise error.Abort(_('empty revision set'))
    return repo[l.last()]

def _pairspec(revspec):
    tree = revset.parse(revspec)
    tree = revset.optimize(tree, True)[1] # fix up "x^:y" -> "(x^):y"
    return tree and tree[0] in ('range', 'rangepre', 'rangepost', 'rangeall')

def revpair(repo, revs):
    if not revs:
        return repo.dirstate.p1(), None

    l = revrange(repo, revs)

    if not l:
        first = second = None
    elif l.isascending():
        first = l.min()
        second = l.max()
    elif l.isdescending():
        first = l.max()
        second = l.min()
    else:
        first = l.first()
        second = l.last()

    if first is None:
        raise error.Abort(_('empty revision range'))
    if (first == second and len(revs) >= 2
        and not all(revrange(repo, [r]) for r in revs)):
        raise error.Abort(_('empty revision on one side of range'))

    # if top-level is range expression, the result must always be a pair
    if first == second and len(revs) == 1 and not _pairspec(revs[0]):
        return repo.lookup(first), None

    return repo.lookup(first), repo.lookup(second)

def revrange(repo, revs):
    """Return a set of revisions from a list of revision specifications."""
    allspecs = []
    for spec in revs:
        if isinstance(spec, int):
            spec = revset.formatspec('rev(%d)', spec)
        allspecs.append(spec)
    m = revset.matchany(repo.ui, allspecs, repo)
    return m(repo)

def meaningfulparents(repo, ctx):
    """Return list of meaningful (or all if debug) parentrevs for rev.

    For merges (two non-nullrev revisions) both parents are meaningful.
    Otherwise the first parent revision is considered meaningful if it
    is not the preceding revision.
    """
    parents = ctx.parents()
    if len(parents) > 1:
        return parents
    if repo.ui.debugflag:
        return [parents[0], repo['null']]
    if parents[0].rev() >= intrev(ctx.rev()) - 1:
        return []
    return parents

def expandpats(pats):
    '''Expand bare globs when running on Windows.
    On POSIX we assume it has already been done by the shell.'''
    if not util.expandglobs:
        return list(pats)
    ret = []
    for kindpat in pats:
        kind, pat = matchmod._patsplit(kindpat, None)
        if kind is None:
            try:
                globbed = glob.glob(pat)
            except re.error:
                globbed = [pat]
            if globbed:
                ret.extend(globbed)
                continue
        ret.append(kindpat)
    return ret

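# Minimal sketch of the bare-glob expansion above: patterns without an
# explicit kind prefix (like 'glob:' or 're:') are run through glob.glob,
# and a pattern that matches nothing passes through unchanged so the matcher
# can still report it as a bad match later. The expand name is illustrative.
import glob

def expand(pats):
    ret = []
    for pat in pats:
        globbed = glob.glob(pat)
        ret.extend(globbed if globbed else [pat])
    return ret

# Globs that match something expand; non-matching patterns pass through.
print(expand(['*.py', 'no-such-file-*']))
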
def matchandpats(ctx, pats=(), opts=None, globbed=False, default='relpath',
                 badfn=None):
    '''Return a matcher and the patterns that were used.
    The matcher will warn about bad matches, unless an alternate badfn callback
    is provided.'''
    if pats == ("",):
        pats = []
    if opts is None:
        opts = {}
    if not globbed and default == 'relpath':
        pats = expandpats(pats or [])

    def bad(f, msg):
        ctx.repo().ui.warn("%s: %s\n" % (m.rel(f), msg))

    if badfn is None:
        badfn = bad

    m = ctx.match(pats, opts.get('include'), opts.get('exclude'),
                  default, listsubrepos=opts.get('subrepos'), badfn=badfn)

    if m.always():
        pats = []
    return m, pats

def match(ctx, pats=(), opts=None, globbed=False, default='relpath',
          badfn=None):
    '''Return a matcher that will warn about bad matches.'''
    return matchandpats(ctx, pats, opts, globbed, default, badfn=badfn)[0]

def matchall(repo):
    '''Return a matcher that will efficiently match everything.'''
    return matchmod.always(repo.root, repo.getcwd())

def matchfiles(repo, files, badfn=None):
    '''Return a matcher that will efficiently match exactly these files.'''
    return matchmod.exact(repo.root, repo.getcwd(), files, badfn=badfn)

def origpath(ui, repo, filepath):
    '''customize where .orig files are created

    Fetch user defined path from config file: [ui] origbackuppath = <path>
    Fall back to default (filepath) if not specified
    '''
    origbackuppath = ui.config('ui', 'origbackuppath', None)
    if origbackuppath is None:
        return filepath + ".orig"

    filepathfromroot = os.path.relpath(filepath, start=repo.root)
    fullorigpath = repo.wjoin(origbackuppath, filepathfromroot)

    origbackupdir = repo.vfs.dirname(fullorigpath)
    if not repo.vfs.exists(origbackupdir):
        ui.note(_('creating directory: %s\n') % origbackupdir)
        util.makedirs(origbackupdir)

    return fullorigpath + ".orig"

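# Path-only sketch of the .orig placement logic above: with no configured
# ui.origbackuppath the backup lands next to the file; with one, the file's
# repo-relative path is re-rooted under that directory. Pure path
# arithmetic; no repo object involved, and the origbackup name is a stand-in.
import os

def origbackup(filepath, root, origbackuppath=None):
    if origbackuppath is None:
        return filepath + '.orig'
    rel = os.path.relpath(filepath, start=root)
    return os.path.join(root, origbackuppath, rel) + '.orig'

assert origbackup('/repo/src/a.c', '/repo') == '/repo/src/a.c.orig'
print(origbackup('/repo/src/a.c', '/repo', '.hg/origbackups'))
# -> /repo/.hg/origbackups/src/a.c.orig (on POSIX)
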
def addremove(repo, matcher, prefix, opts=None, dry_run=None, similarity=None):
    if opts is None:
        opts = {}
    m = matcher
    if dry_run is None:
        dry_run = opts.get('dry_run')
    if similarity is None:
        similarity = float(opts.get('similarity') or 0)

    ret = 0
    join = lambda f: os.path.join(prefix, f)

    def matchessubrepo(matcher, subpath):
        if matcher.exact(subpath):
            return True
        for f in matcher.files():
            if f.startswith(subpath):
                return True
        return False

    wctx = repo[None]
    for subpath in sorted(wctx.substate):
        if opts.get('subrepos') or matchessubrepo(m, subpath):
            sub = wctx.sub(subpath)
            try:
                submatch = matchmod.narrowmatcher(subpath, m)
                if sub.addremove(submatch, prefix, opts, dry_run, similarity):
                    ret = 1
            except error.LookupError:
                repo.ui.status(_("skipping missing subrepository: %s\n")
                               % join(subpath))

    rejected = []
    def badfn(f, msg):
        if f in m.files():
            m.bad(f, msg)
        rejected.append(f)

    badmatch = matchmod.badmatch(m, badfn)
    added, unknown, deleted, removed, forgotten = _interestingfiles(repo,
                                                                    badmatch)

    unknownset = set(unknown + forgotten)
    toprint = unknownset.copy()
    toprint.update(deleted)
    for abs in sorted(toprint):
        if repo.ui.verbose or not m.exact(abs):
            if abs in unknownset:
                status = _('adding %s\n') % m.uipath(abs)
            else:
                status = _('removing %s\n') % m.uipath(abs)
            repo.ui.status(status)

    renames = _findrenames(repo, m, added + unknown, removed + deleted,
                           similarity)

    if not dry_run:
        _markchanges(repo, unknown + forgotten, deleted, renames)

    for f in rejected:
        if f in m.files():
            return 1
    return ret

def marktouched(repo, files, similarity=0.0):
    '''Assert that files have somehow been operated upon. files are relative to
    the repo root.'''
    m = matchfiles(repo, files, badfn=lambda x, y: rejected.append(x))
    rejected = []

    added, unknown, deleted, removed, forgotten = _interestingfiles(repo, m)

    if repo.ui.verbose:
        unknownset = set(unknown + forgotten)
        toprint = unknownset.copy()
        toprint.update(deleted)
        for abs in sorted(toprint):
            if abs in unknownset:
                status = _('adding %s\n') % abs
            else:
                status = _('removing %s\n') % abs
            repo.ui.status(status)

    renames = _findrenames(repo, m, added + unknown, removed + deleted,
                           similarity)

    _markchanges(repo, unknown + forgotten, deleted, renames)

    for f in rejected:
        if f in m.files():
            return 1
    return 0

def _interestingfiles(repo, matcher):
    '''Walk dirstate with matcher, looking for files that addremove would care
    about.

    This is different from dirstate.status because it doesn't care about
    whether files are modified or clean.'''
    added, unknown, deleted, removed, forgotten = [], [], [], [], []
    audit_path = pathutil.pathauditor(repo.root)

    ctx = repo[None]
    dirstate = repo.dirstate
    walkresults = dirstate.walk(matcher, sorted(ctx.substate), True, False,
                                full=False)
    for abs, st in walkresults.iteritems():
        dstate = dirstate[abs]
        if dstate == '?' and audit_path.check(abs):
            unknown.append(abs)
        elif dstate != 'r' and not st:
            deleted.append(abs)
        elif dstate == 'r' and st:
            forgotten.append(abs)
        # for finding renames
        elif dstate == 'r' and not st:
            removed.append(abs)
        elif dstate == 'a':
            added.append(abs)

    return added, unknown, deleted, removed, forgotten

def _findrenames(repo, matcher, added, removed, similarity):
    '''Find renames from removed files to added ones.'''
    renames = {}
    if similarity > 0:
        for old, new, score in similar.findrenames(repo, added, removed,
                                                   similarity):
            if (repo.ui.verbose or not matcher.exact(old)
                or not matcher.exact(new)):
                repo.ui.status(_('recording removal of %s as rename to %s '
                                 '(%d%% similar)\n') %
                               (matcher.rel(old), matcher.rel(new),
                                score * 100))
            renames[new] = old
    return renames

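# Hedged sketch of similarity-driven rename detection: Mercurial's
# similar.findrenames is not reproduced here, so difflib.SequenceMatcher
# stands in as the content-similarity score. Pairs scoring at or above the
# threshold are recorded as new -> old, mirroring the renames dict above.
import difflib

def findrenames(added, removed, threshold):
    renames = {}
    for new, newdata in added.items():
        for old, olddata in removed.items():
            score = difflib.SequenceMatcher(None, olddata, newdata).ratio()
            if score >= threshold:
                renames[new] = (old, score)
    return renames

removed = {'util.py': 'def f():\n    return 1\n'}
added = {'helpers.py': 'def f():\n    return 1\n# moved\n'}
print(findrenames(added, removed, 0.75))  # {'helpers.py': ('util.py', ...)}
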
982 def _markchanges(repo, unknown, deleted, renames):
982 def _markchanges(repo, unknown, deleted, renames):
983 '''Marks the files in unknown as added, the files in deleted as removed,
983 '''Marks the files in unknown as added, the files in deleted as removed,
984 and the files in renames as copied.'''
984 and the files in renames as copied.'''
985 wctx = repo[None]
985 wctx = repo[None]
986 with repo.wlock():
986 with repo.wlock():
987 wctx.forget(deleted)
987 wctx.forget(deleted)
988 wctx.add(unknown)
988 wctx.add(unknown)
989 for new, old in renames.iteritems():
989 for new, old in renames.iteritems():
990 wctx.copy(old, new)
990 wctx.copy(old, new)
991
991
992 def dirstatecopy(ui, repo, wctx, src, dst, dryrun=False, cwd=None):
992 def dirstatecopy(ui, repo, wctx, src, dst, dryrun=False, cwd=None):
993 """Update the dirstate to reflect the intent of copying src to dst. For
993 """Update the dirstate to reflect the intent of copying src to dst. For
994 different reasons it might not end with dst being marked as copied from src.
994 different reasons it might not end with dst being marked as copied from src.
995 """
995 """
996 origsrc = repo.dirstate.copied(src) or src
996 origsrc = repo.dirstate.copied(src) or src
997 if dst == origsrc: # copying back a copy?
997 if dst == origsrc: # copying back a copy?
998 if repo.dirstate[dst] not in 'mn' and not dryrun:
998 if repo.dirstate[dst] not in 'mn' and not dryrun:
999 repo.dirstate.normallookup(dst)
999 repo.dirstate.normallookup(dst)
1000 else:
1000 else:
1001 if repo.dirstate[origsrc] == 'a' and origsrc == src:
1001 if repo.dirstate[origsrc] == 'a' and origsrc == src:
1002 if not ui.quiet:
1002 if not ui.quiet:
1003 ui.warn(_("%s has not been committed yet, so no copy "
1003 ui.warn(_("%s has not been committed yet, so no copy "
1004 "data will be stored for %s.\n")
1004 "data will be stored for %s.\n")
1005 % (repo.pathto(origsrc, cwd), repo.pathto(dst, cwd)))
1005 % (repo.pathto(origsrc, cwd), repo.pathto(dst, cwd)))
1006 if repo.dirstate[dst] in '?r' and not dryrun:
1006 if repo.dirstate[dst] in '?r' and not dryrun:
1007 wctx.add([dst])
1007 wctx.add([dst])
1008 elif not dryrun:
1008 elif not dryrun:
1009 wctx.copy(origsrc, dst)
1009 wctx.copy(origsrc, dst)
1010
1010
1011 def readrequires(opener, supported):
1011 def readrequires(opener, supported):
1012 '''Reads and parses .hg/requires and checks if all entries found
1012 '''Reads and parses .hg/requires and checks if all entries found
1013 are in the list of supported features.'''
1013 are in the list of supported features.'''
1014 requirements = set(opener.read("requires").splitlines())
1014 requirements = set(opener.read("requires").splitlines())
1015 missings = []
1015 missings = []
1016 for r in requirements:
1016 for r in requirements:
1017 if r not in supported:
1017 if r not in supported:
1018 if not r or not r[0].isalnum():
1018 if not r or not r[0].isalnum():
1019 raise error.RequirementError(_(".hg/requires file is corrupt"))
1019 raise error.RequirementError(_(".hg/requires file is corrupt"))
1020 missings.append(r)
1020 missings.append(r)
1021 missings.sort()
1021 missings.sort()
1022 if missings:
1022 if missings:
1023 raise error.RequirementError(
1023 raise error.RequirementError(
1024 _("repository requires features unknown to this Mercurial: %s")
1024 _("repository requires features unknown to this Mercurial: %s")
1025 % " ".join(missings),
1025 % " ".join(missings),
1026 hint=_("see https://mercurial-scm.org/wiki/MissingRequirement"
1026 hint=_("see https://mercurial-scm.org/wiki/MissingRequirement"
1027 " for more information"))
1027 " for more information"))
1028 return requirements
1028 return requirements
1029
1029
1030 def writerequires(opener, requirements):
1030 def writerequires(opener, requirements):
1031 with opener('requires', 'w') as fp:
1031 with opener('requires', 'w') as fp:
1032 for r in sorted(requirements):
1032 for r in sorted(requirements):
1033 fp.write("%s\n" % r)
1033 fp.write("%s\n" % r)
1034
1034
class filecachesubentry(object):
    def __init__(self, path, stat):
        self.path = path
        self.cachestat = None
        self._cacheable = None

        if stat:
            self.cachestat = filecachesubentry.stat(self.path)

            if self.cachestat:
                self._cacheable = self.cachestat.cacheable()
            else:
                # None means we don't know yet
                self._cacheable = None

    def refresh(self):
        if self.cacheable():
            self.cachestat = filecachesubentry.stat(self.path)

    def cacheable(self):
        if self._cacheable is not None:
            return self._cacheable

        # we don't know yet, assume it is for now
        return True

    def changed(self):
        # no point in going further if we can't cache it
        if not self.cacheable():
            return True

        newstat = filecachesubentry.stat(self.path)

        # we may not know if it's cacheable yet, check again now
        if newstat and self._cacheable is None:
            self._cacheable = newstat.cacheable()

        # check again
        if not self._cacheable:
            return True

        if self.cachestat != newstat:
            self.cachestat = newstat
            return True
        else:
            return False

    @staticmethod
    def stat(path):
        try:
            return util.cachestat(path)
        except OSError as e:
            if e.errno != errno.ENOENT:
                raise

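# Illustrative sketch (not part of the module): watching one file for
# replacement. Stat info is recorded up front; changed() re-stats and
# reports True when the file differs or its stat info is not cacheable.
def _example_watchfile(path):
    entry = filecachesubentry(path, stat=True)
    # ... the file may be atomically replaced by another process here ...
    return entry.changed()
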
class filecacheentry(object):
    def __init__(self, paths, stat=True):
        self._entries = []
        for path in paths:
            self._entries.append(filecachesubentry(path, stat))

    def changed(self):
        '''true if any entry has changed'''
        for entry in self._entries:
            if entry.changed():
                return True
        return False

    def refresh(self):
        for entry in self._entries:
            entry.refresh()

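# Illustrative sketch (not part of the module): one cache entry guarding
# several related files, as the filecache decorator below does when given
# multiple paths; changed() is True if any one of them changed.
def _example_watchfiles(paths):
    entry = filecacheentry(paths, stat=True)
    if entry.changed():
        entry.refresh()  # re-record stat info after recomputing state
        return True
    return False
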
class filecache(object):
    '''A property-like decorator that tracks files under .hg/ for updates.

    Records stat info when called in _filecache.

    On subsequent calls, compares old stat info with new info, and recreates
    the object when any of the files changes, updating the new stat info in
    _filecache.

    Mercurial either atomically renames or appends to files under .hg, so to
    ensure the cache is reliable we need the filesystem to be able to tell us
    if a file has been replaced. If it can't, we fall back to recreating the
    object on every call (essentially the same behavior as propertycache).

    '''
    def __init__(self, *paths):
        self.paths = paths

    def join(self, obj, fname):
        """Used to compute the runtime path of a cached file.

        Users should subclass filecache and provide their own version of this
        function to call the appropriate join function on 'obj' (an instance
        of the class whose member function was decorated).
        """
        return obj.join(fname)

    def __call__(self, func):
        self.func = func
        self.name = func.__name__
        return self

    def __get__(self, obj, type=None):
        # do we need to check if the file changed?
        if self.name in obj.__dict__:
            assert self.name in obj._filecache, self.name
            return obj.__dict__[self.name]

        entry = obj._filecache.get(self.name)

        if entry:
            if entry.changed():
                entry.obj = self.func(obj)
        else:
            paths = [self.join(obj, path) for path in self.paths]

            # We stat -before- creating the object so our cache doesn't lie
            # if a writer modified the file between the time we read it and
            # the time we stat it
            entry = filecacheentry(paths, True)
            entry.obj = self.func(obj)

            obj._filecache[self.name] = entry

        obj.__dict__[self.name] = entry.obj
        return entry.obj

    def __set__(self, obj, value):
        if self.name not in obj._filecache:
            # we add an entry for the missing value because X in __dict__
            # implies X in _filecache
            paths = [self.join(obj, path) for path in self.paths]
            ce = filecacheentry(paths, False)
            obj._filecache[self.name] = ce
        else:
            ce = obj._filecache[self.name]

        ce.obj = value # update cached copy
        obj.__dict__[self.name] = value # update copy returned by obj.x

    def __delete__(self, obj):
        try:
            del obj.__dict__[self.name]
        except KeyError:
            raise AttributeError(self.name)

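# Illustrative sketch (not part of the module): the usual way filecache is
# consumed, modeled on the repofilecache pattern in localrepo but with
# hypothetical names throughout. The decorated class must provide a
# _filecache dict, and the subclass supplies the join() that maps a cache
# name to its on-disk path.
class _examplefilecache(filecache):
    def join(self, obj, fname):
        # resolve names against the object's vfs (an assumed attribute)
        return obj.vfs.join(fname)

class _examplerepo(object):
    def __init__(self, vfs):
        self.vfs = vfs
        self._filecache = {}

    @_examplefilecache('bookmarks')
    def bookmarks(self):
        # re-run only when the stat info of .hg/bookmarks changes;
        # otherwise the cached object in __dict__ is returned directly
        return self.vfs.read('bookmarks')

# usage: data = _examplerepo(somevfs).bookmarks  (an attribute, not a call)
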
def _locksub(repo, lock, envvar, cmd, environ=None, *args, **kwargs):
    if lock is None:
        raise error.LockInheritanceContractViolation(
            'lock can only be inherited while held')
    if environ is None:
        environ = {}
    with lock.inherit() as locker:
        environ[envvar] = locker
        return repo.ui.system(cmd, environ=environ, *args, **kwargs)

def wlocksub(repo, cmd, *args, **kwargs):
    """run cmd as a subprocess that allows inheriting repo's wlock

    This can only be called while the wlock is held. This takes all the
    arguments that ui.system does, and returns the exit code of the
    subprocess."""
    return _locksub(repo, repo.currentwlock(), 'HG_WLOCK_LOCKER', cmd, *args,
                    **kwargs)

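# Illustrative sketch (not part of the module): running a child hg command
# while the wlock is held; without lock inheritance the child would block
# on the lock the parent already owns. The command string is only an
# example.
def _example_childcommand(repo):
    wlock = repo.wlock()
    try:
        # the child process receives HG_WLOCK_LOCKER and can inherit the lock
        return wlocksub(repo, 'hg update --quiet')
    finally:
        wlock.release()
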
def gdinitconfig(ui):
    """helper function to know if a repo should be created as general delta
    """
    # experimental config: format.generaldelta
    return (ui.configbool('format', 'generaldelta', False)
            or ui.configbool('format', 'usegeneraldelta', True))

def gddeltaconfig(ui):
    """helper function to know if incoming deltas should be optimised
    """
    # experimental config: format.generaldelta
    return ui.configbool('format', 'generaldelta', False)
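
# Illustrative sketch (not part of the module): how the two helpers above
# might drive requirement selection when creating a repository. The
# relevant (experimental) hgrc knobs, per the code above, are:
#
#   [format]
#   generaldelta = yes      # also makes gddeltaconfig() report True
#   usegeneraldelta = yes   # on by default; consulted at init time
#
def _example_newrepoformat(ui):
    requirements = set(['revlogv1'])
    if gdinitconfig(ui):
        requirements.add('generaldelta')
    return requirements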