localrepo: extract resolving of opener options to standalone functions...
Gregory Szorc
r39736:b10d1458 default
@@ -1,2265 +1,2268
# bundle2.py - generic container format to transmit arbitrary data.
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""Handling of the new bundle2 format

The goal of bundle2 is to act as an atomic packet to transmit a set of
payloads in an application agnostic way. It consists of a sequence of "parts"
that will be handed to and processed by the application layer.


General format architecture
===========================

The format is structured as follows

 - magic string
 - stream level parameters
 - payload parts (any number)
 - end of stream marker.

The binary format
============================

All numbers are unsigned and big-endian.

stream level parameters
------------------------

The binary format is as follows

:params size: int32

  The total number of bytes used by the parameters

:params value: arbitrary number of bytes

  A blob of `params size` containing the serialized version of all stream
  level parameters.

  The blob contains a space separated list of parameters. Parameters with a
  value are stored in the form `<name>=<value>`. Both name and value are
  urlquoted.

  Empty names are forbidden.

  A name MUST start with a letter. If this first letter is lower case, the
  parameter is advisory and can be safely ignored. However, when the first
  letter is capital, the parameter is mandatory and the bundling process MUST
  stop if it is not able to process it.

  Stream parameters use a simple textual format for two main reasons:

  - Stream level parameters should remain simple and we want to discourage any
    crazy usage.
  - Textual data allows easy human inspection of a bundle2 header in case of
    trouble.

  Any application level options MUST go into a bundle2 part instead.

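
  For instance, a bundle whose payload is compressed with bzip2 starts with
  the magic string followed by a 14 byte parameter blob (an illustrative
  sketch of the framing, written in Python notation)::

    'HG20' + pack('>i', 14) + 'Compression=BZ'
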

Payload part
------------------------

The binary format is as follows

:header size: int32

    The total number of bytes used by the part header. When the header is
    empty (size = 0) this is interpreted as the end of stream marker.

:header:

    The header defines how to interpret the part. It contains two pieces of
    data: the part type, and the part parameters.

    The part type is used to route the part to an application level handler
    that can interpret the payload.

    Part parameters are passed to the application level handler. They are
    meant to convey information that will help the application level object
    interpret the part payload.

    The binary format of the header is as follows

    :typesize: (one byte)

    :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)

    :partid: A 32-bit integer (unique in the bundle) that can be used to refer
             to this part.

    :parameters:

        A part's parameters may have arbitrary content, the binary structure
        is::

            <mandatory-count><advisory-count><param-sizes><param-data>

        :mandatory-count: 1 byte, number of mandatory parameters

        :advisory-count:  1 byte, number of advisory parameters

        :param-sizes:

            N couples of bytes, where N is the total number of parameters.
            Each couple contains (<size-of-key>, <size-of-value>) for one
            parameter.

        :param-data:

            A blob of bytes from which each parameter key and value can be
            retrieved using the list of size couples stored in the previous
            field.

            Mandatory parameters come first, then the advisory ones.

            Each parameter's key MUST be unique within the part.

:payload:

    The payload is a series of `<chunksize><chunkdata>`.

    `chunksize` is an int32, `chunkdata` are plain bytes (as much as
    `chunksize` says). The payload part is concluded by a zero size chunk.

    The current implementation always produces either zero or one chunk.
    This is an implementation limitation that will ultimately be lifted.

    `chunksize` can be negative to trigger special case processing. No such
    processing is in place yet.

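
As an illustrative sketch (not a normative example), an "output" part with no
parameters and an empty payload would serialize as::

    pack('>i', 13)                # header size
    + pack('>B', 6) + 'output'    # part type size and part type
    + pack('>I', 0)               # part id
    + pack('>BB', 0, 0)           # mandatory and advisory parameter counts
    + pack('>i', 0)               # zero size chunk: end of the part payload
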

Bundle processing
============================

Each part is processed in order using a "part handler". Handlers are
registered for a certain part type.

The matching of a part to its handler is case insensitive. The case of the
part type is used to know if a part is mandatory or advisory. If the part type
contains any uppercase character it is considered mandatory. When no handler
is known for a mandatory part, the process is aborted and an exception is
raised. If the part is advisory and no handler is known, the part is ignored.
When the process is aborted, the full bundle is still read from the stream to
keep the channel usable. But none of the parts read after an abort are
processed. In the future, dropping the stream may become an option for
channels we do not care to preserve.
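
For example, under these rules a part received with type "CHECK:HEADS" is
mandatory (its type contains uppercase characters): unbundling aborts unless a
handler is registered for "check:heads". The same part received as
"check:heads" would simply be ignored when no handler is known.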
"""

from __future__ import absolute_import, division

import collections
import errno
import os
import re
import string
import struct
import sys

from .i18n import _
from . import (
    bookmarks,
    changegroup,
    encoding,
    error,
    node as nodemod,
    obsolete,
    phases,
    pushkey,
    pycompat,
    streamclone,
    tags,
    url,
    util,
)
from .utils import (
    stringutil,
)

urlerr = util.urlerr
urlreq = util.urlreq

_pack = struct.pack
_unpack = struct.unpack

_fstreamparamsize = '>i'
_fpartheadersize = '>i'
_fparttypesize = '>B'
_fpartid = '>I'
_fpayloadsize = '>i'
_fpartparamcount = '>BB'

preferedchunksize = 32768

_parttypeforbidden = re.compile('[^a-zA-Z0-9_:-]')

def outdebug(ui, message):
    """debug regarding output stream (bundling)"""
    if ui.configbool('devel', 'bundle2.debug'):
        ui.debug('bundle2-output: %s\n' % message)

def indebug(ui, message):
    """debug on input stream (unbundling)"""
    if ui.configbool('devel', 'bundle2.debug'):
        ui.debug('bundle2-input: %s\n' % message)

def validateparttype(parttype):
    """raise ValueError if a parttype contains an invalid character"""
    if _parttypeforbidden.search(parttype):
        raise ValueError(parttype)

def _makefpartparamsizes(nbparams):
    """return a struct format to read part parameter sizes

    The number of parameters is variable so we need to build that format
    dynamically.
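
    For instance (illustrative), `_makefpartparamsizes(2)` returns `'>BBBB'`,
    i.e. two (key size, value size) byte pairs.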
    """
    return '>'+('BB'*nbparams)

parthandlermapping = {}

def parthandler(parttype, params=()):
    """decorator that registers a function as a bundle2 part handler

    eg::

        @parthandler('myparttype', ('mandatory', 'param', 'handled'))
        def myparttypehandler(...):
            '''process a part of type "my part".'''
            ...
    """
    validateparttype(parttype)
    def _decorator(func):
        lparttype = parttype.lower() # enforce lower case matching.
        assert lparttype not in parthandlermapping
        parthandlermapping[lparttype] = func
        func.params = frozenset(params)
        return func
    return _decorator

class unbundlerecords(object):
    """keep record of what happens during an unbundle

    New records are added using `records.add('cat', obj)`, where 'cat' is a
    category of record and obj is an arbitrary object.

    `records['cat']` will return all entries of this category 'cat'.

    Iterating on the object itself will yield `('category', obj)` tuples
    for all entries.

    All iteration happens in chronological order.
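
    A minimal usage sketch (illustrative only)::

        records = unbundlerecords()
        records.add('changegroup', {'return': 1})
        records['changegroup']    # -> ({'return': 1},)
        len(records)              # -> 1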
    """

    def __init__(self):
        self._categories = {}
        self._sequences = []
        self._replies = {}

    def add(self, category, entry, inreplyto=None):
        """add a new record of a given category.

        The entry can then be retrieved in the list returned by
        self['category']."""
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))
        if inreplyto is not None:
            self.getreplies(inreplyto).add(category, entry)

    def getreplies(self, partid):
        """get the records that are replies to a specific part"""
        return self._replies.setdefault(partid, unbundlerecords())

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

    def __len__(self):
        return len(self._sequences)

    def __nonzero__(self):
        return bool(self._sequences)

    __bool__ = __nonzero__

class bundleoperation(object):
    """an object that represents a single bundling process

    Its purpose is to carry unbundle-related objects and states.

    A new object should be created at the beginning of each bundle processing.
    The object is to be returned by the processing function.

    The object has very little content now; it will ultimately contain:
    * an access to the repo the bundle is applied to,
    * a ui object,
    * a way to retrieve a transaction to add changes to the repo,
    * a way to record the result of processing each part,
    * a way to construct a bundle response when applicable.
    """

    def __init__(self, repo, transactiongetter, captureoutput=True, source=''):
        self.repo = repo
        self.ui = repo.ui
        self.records = unbundlerecords()
        self.reply = None
        self.captureoutput = captureoutput
        self.hookargs = {}
        self._gettransaction = transactiongetter
        # carries value that can modify part behavior
        self.modes = {}
        self.source = source

    def gettransaction(self):
        transaction = self._gettransaction()

        if self.hookargs:
            # the ones added to the transaction supersede those added
            # to the operation.
            self.hookargs.update(transaction.hookargs)
            transaction.hookargs = self.hookargs

            # mark the hookargs as flushed. further attempts to add to
            # hookargs will result in an abort.
            self.hookargs = None

        return transaction

    def addhookargs(self, hookargs):
        if self.hookargs is None:
            raise error.ProgrammingError('attempted to add hookargs to '
                                         'operation after transaction started')
        self.hookargs.update(hookargs)

class TransactionUnavailable(RuntimeError):
    pass

def _notransaction():
    """default method to get a transaction while processing a bundle

    Raise an exception to highlight the fact that no transaction was expected
    to be created"""
    raise TransactionUnavailable()

def applybundle(repo, unbundler, tr, source, url=None, **kwargs):
    # transform me into unbundler.apply() as soon as the freeze is lifted
    if isinstance(unbundler, unbundle20):
        tr.hookargs['bundle2'] = '1'
        if source is not None and 'source' not in tr.hookargs:
            tr.hookargs['source'] = source
        if url is not None and 'url' not in tr.hookargs:
            tr.hookargs['url'] = url
        return processbundle(repo, unbundler, lambda: tr, source=source)
    else:
        # the transactiongetter won't be used, but we might as well set it
        op = bundleoperation(repo, lambda: tr, source=source)
        _processchangegroup(op, unbundler, tr, source, url, **kwargs)
        return op

class partiterator(object):
    def __init__(self, repo, op, unbundler):
        self.repo = repo
        self.op = op
        self.unbundler = unbundler
        self.iterator = None
        self.count = 0
        self.current = None

    def __enter__(self):
        def func():
            itr = enumerate(self.unbundler.iterparts())
            for count, p in itr:
                self.count = count
                self.current = p
                yield p
                p.consume()
                self.current = None
        self.iterator = func()
        return self.iterator

    def __exit__(self, type, exc, tb):
        if not self.iterator:
            return

        # Only gracefully abort in a normal exception situation. User aborts
        # like Ctrl+C throw a KeyboardInterrupt which is not a subclass of
        # Exception, and should not be gracefully cleaned up.
        if isinstance(exc, Exception):
            # Any exceptions seeking to the end of the bundle at this point are
            # almost certainly related to the underlying stream being bad.
            # And, chances are that the exception we're handling is related to
            # getting in that bad state. So, we swallow the seeking error and
            # re-raise the original error.
            seekerror = False
            try:
                if self.current:
                    # consume the part content to not corrupt the stream.
                    self.current.consume()

                for part in self.iterator:
                    # consume the bundle content
                    part.consume()
            except Exception:
                seekerror = True

            # Small hack to let caller code distinguish exceptions from bundle2
            # processing from processing the old format. This is mostly needed
            # to handle different return codes to unbundle according to the
            # type of bundle. We should probably clean up or drop this return
            # code craziness in a future version.
            exc.duringunbundle2 = True
            salvaged = []
            replycaps = None
            if self.op.reply is not None:
                salvaged = self.op.reply.salvageoutput()
                replycaps = self.op.reply.capabilities
            exc._replycaps = replycaps
            exc._bundle2salvagedoutput = salvaged

            # Re-raising from a variable loses the original stack. So only use
            # that form if we need to.
            if seekerror:
                raise exc

        self.repo.ui.debug('bundle2-input-bundle: %i parts total\n' %
                           self.count)

def processbundle(repo, unbundler, transactiongetter=None, op=None, source=''):
    """This function processes a bundle, applying its effects to/from a repo

    It iterates over each part then searches for and uses the proper handling
    code to process the part. Parts are processed in order.

    An unknown mandatory part will abort the process.

    It is temporarily possible to provide a prebuilt bundleoperation to the
    function. This is used to ensure output is properly propagated in case of
    an error during the unbundling. This output capturing part will likely be
    reworked and this ability will probably go away in the process.
    """
    if op is None:
        if transactiongetter is None:
            transactiongetter = _notransaction
        op = bundleoperation(repo, transactiongetter, source=source)
    # todo:
    # - replace this with an init function soon.
    # - exception catching
    unbundler.params
    if repo.ui.debugflag:
        msg = ['bundle2-input-bundle:']
        if unbundler.params:
            msg.append(' %i params' % len(unbundler.params))
        if op._gettransaction is None or op._gettransaction is _notransaction:
            msg.append(' no-transaction')
        else:
            msg.append(' with-transaction')
        msg.append('\n')
        repo.ui.debug(''.join(msg))

    processparts(repo, op, unbundler)

    return op

def processparts(repo, op, unbundler):
    with partiterator(repo, op, unbundler) as parts:
        for part in parts:
            _processpart(op, part)

def _processchangegroup(op, cg, tr, source, url, **kwargs):
    ret = cg.apply(op.repo, tr, source, url, **kwargs)
    op.records.add('changegroup', {
        'return': ret,
    })
    return ret

def _gethandler(op, part):
    status = 'unknown' # used by debug output
    try:
        handler = parthandlermapping.get(part.type)
        if handler is None:
            status = 'unsupported-type'
            raise error.BundleUnknownFeatureError(parttype=part.type)
        indebug(op.ui, 'found a handler for part %s' % part.type)
        unknownparams = part.mandatorykeys - handler.params
        if unknownparams:
            unknownparams = list(unknownparams)
            unknownparams.sort()
            status = 'unsupported-params (%s)' % ', '.join(unknownparams)
            raise error.BundleUnknownFeatureError(parttype=part.type,
                                                  params=unknownparams)
        status = 'supported'
    except error.BundleUnknownFeatureError as exc:
        if part.mandatory: # mandatory parts
            raise
        indebug(op.ui, 'ignoring unsupported advisory part %s' % exc)
        return # skip to part processing
    finally:
        if op.ui.debugflag:
            msg = ['bundle2-input-part: "%s"' % part.type]
            if not part.mandatory:
                msg.append(' (advisory)')
            nbmp = len(part.mandatorykeys)
            nbap = len(part.params) - nbmp
            if nbmp or nbap:
                msg.append(' (params:')
                if nbmp:
                    msg.append(' %i mandatory' % nbmp)
                if nbap:
                    msg.append(' %i advisory' % nbap)
                msg.append(')')
            msg.append(' %s\n' % status)
            op.ui.debug(''.join(msg))

    return handler

def _processpart(op, part):
    """process a single part from a bundle

    The part is guaranteed to have been fully consumed when the function exits
    (even if an exception is raised)."""
    handler = _gethandler(op, part)
    if handler is None:
        return

    # handler is called outside the try block in _gethandler so that we don't
    # risk catching KeyErrors from anything other than the
    # parthandlermapping lookup (any KeyError raised by handler()
    # itself represents a defect of a different variety).
    output = None
    if op.captureoutput and op.reply is not None:
        op.ui.pushbuffer(error=True, subproc=True)
        output = ''
    try:
        handler(op, part)
    finally:
        if output is not None:
            output = op.ui.popbuffer()
        if output:
            outpart = op.reply.newpart('output', data=output,
                                       mandatory=False)
            outpart.addparam(
                'in-reply-to', pycompat.bytestr(part.id), mandatory=False)

def decodecaps(blob):
    """decode a bundle2 caps bytes blob into a dictionary

    The blob is a list of capabilities (one per line).
    Capabilities may have values using a line of the form::

        capability=value1,value2,value3

    The values are always a list."""
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if '=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split('=', 1)
            vals = vals.split(',')
        key = urlreq.unquote(key)
        vals = [urlreq.unquote(v) for v in vals]
        caps[key] = vals
    return caps

def encodecaps(caps):
    """encode a bundle2 caps dictionary into a bytes blob"""
    chunks = []
    for ca in sorted(caps):
        vals = caps[ca]
        ca = urlreq.quote(ca)
        vals = [urlreq.quote(v) for v in vals]
        if vals:
            ca = "%s=%s" % (ca, ','.join(vals))
        chunks.append(ca)
    return '\n'.join(chunks)
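
# An illustrative round trip (a sketch for readers, not part of the module's
# API surface):
#   encodecaps({'HG20': (), 'changegroup': ['01', '02']})
#       == 'HG20\nchangegroup=01,02'
#   decodecaps('HG20\nchangegroup=01,02')
#       == {'HG20': [], 'changegroup': ['01', '02']}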

bundletypes = {
    "": ("", 'UN'),       # only when using unbundle on ssh and old http servers
                          # since the unification ssh accepts a header but there
                          # is no capability signaling it.
    "HG20": (), # special-cased below
    "HG10UN": ("HG10UN", 'UN'),
    "HG10BZ": ("HG10", 'BZ'),
    "HG10GZ": ("HG10GZ", 'GZ'),
}

# hgweb uses this list to communicate its preferred type
bundlepriority = ['HG10GZ', 'HG10BZ', 'HG10UN']

class bundle20(object):
    """represent an outgoing bundle2 container

    Use the `addparam` method to add a stream level parameter and `newpart` to
    populate it. Then call `getchunks` to retrieve all the binary chunks of
    data that compose the bundle2 container."""

    _magicstring = 'HG20'

    def __init__(self, ui, capabilities=()):
        self.ui = ui
        self._params = []
        self._parts = []
        self.capabilities = dict(capabilities)
        self._compengine = util.compengines.forbundletype('UN')
        self._compopts = None
        # If compression is being handled by a consumer of the raw
        # data (e.g. the wire protocol), unsetting this flag tells
        # consumers that the bundle is best left uncompressed.
        self.prefercompressed = True

    def setcompression(self, alg, compopts=None):
        """setup core part compression to <alg>"""
        if alg in (None, 'UN'):
            return
        assert not any(n.lower() == 'compression' for n, v in self._params)
        self.addparam('Compression', alg)
        self._compengine = util.compengines.forbundletype(alg)
        self._compopts = compopts

    @property
    def nbparts(self):
        """total number of parts added to the bundler"""
        return len(self._parts)

    # methods used to define the bundle2 content
    def addparam(self, name, value=None):
        """add a stream level parameter"""
        if not name:
            raise error.ProgrammingError(b'empty parameter name')
        if name[0:1] not in pycompat.bytestr(string.ascii_letters):
            raise error.ProgrammingError(b'non letter first character: %s'
                                         % name)
        self._params.append((name, value))

    def addpart(self, part):
        """add a new part to the bundle2 container

        Parts contain the actual applicative payload."""
        assert part.id is None
        part.id = len(self._parts) # very cheap counter
        self._parts.append(part)

    def newpart(self, typeid, *args, **kwargs):
        """create a new part and add it to the container

        The part is directly added to the container. For now, this means that
        any failure to properly initialize the part after calling ``newpart``
        should result in a failure of the whole bundling process.

        You can still fall back to manually creating and adding one if you
        need better control."""
        part = bundlepart(typeid, *args, **kwargs)
        self.addpart(part)
        return part

    # methods used to generate the bundle2 stream
    def getchunks(self):
        if self.ui.debugflag:
            msg = ['bundle2-output-bundle: "%s",' % self._magicstring]
            if self._params:
                msg.append(' (%i params)' % len(self._params))
            msg.append(' %i parts total\n' % len(self._parts))
            self.ui.debug(''.join(msg))
        outdebug(self.ui, 'start emission of %s stream' % self._magicstring)
        yield self._magicstring
        param = self._paramchunk()
        outdebug(self.ui, 'bundle parameter: %s' % param)
        yield _pack(_fstreamparamsize, len(param))
        if param:
            yield param
        for chunk in self._compengine.compressstream(self._getcorechunk(),
                                                     self._compopts):
            yield chunk

    def _paramchunk(self):
        """return an encoded version of all stream parameters"""
        blocks = []
        for par, value in self._params:
            par = urlreq.quote(par)
            if value is not None:
                value = urlreq.quote(value)
                par = '%s=%s' % (par, value)
            blocks.append(par)
        return ' '.join(blocks)

    def _getcorechunk(self):
        """yield chunks for the core part of the bundle

        (all but headers and parameters)"""
        outdebug(self.ui, 'start of parts')
        for part in self._parts:
            outdebug(self.ui, 'bundle part: "%s"' % part.type)
            for chunk in part.getchunks(ui=self.ui):
                yield chunk
        outdebug(self.ui, 'end of bundle')
        yield _pack(_fpartheadersize, 0)


    def salvageoutput(self):
        """return a list with a copy of all output parts in the bundle

        This is meant to be used during error handling to make sure we
        preserve server output"""
        salvaged = []
        for part in self._parts:
            if part.type.startswith('output'):
                salvaged.append(part.copy())
        return salvaged


class unpackermixin(object):
    """A mixin to extract bytes and struct data from a stream"""

    def __init__(self, fp):
        self._fp = fp

    def _unpack(self, format):
        """unpack this struct format from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        data = self._readexact(struct.calcsize(format))
        return _unpack(format, data)

    def _readexact(self, size):
        """read exactly <size> bytes from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        return changegroup.readexactly(self._fp, size)

def getunbundler(ui, fp, magicstring=None):
    """return a valid unbundler object for a given magicstring"""
    if magicstring is None:
        magicstring = changegroup.readexactly(fp, 4)
    magic, version = magicstring[0:2], magicstring[2:4]
    if magic != 'HG':
        ui.debug(
            "error: invalid magic: %r (version %r), should be 'HG'\n"
            % (magic, version))
        raise error.Abort(_('not a Mercurial bundle'))
    unbundlerclass = formatmap.get(version)
    if unbundlerclass is None:
        raise error.Abort(_('unknown bundle version %s') % version)
    unbundler = unbundlerclass(ui, fp)
    indebug(ui, 'start processing of %s stream' % magicstring)
    return unbundler
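
# Illustrative producer/consumer sketch (assumed calling code, not part of
# this module):
#
#   bundler = bundle20(ui)
#   bundler.newpart('output', data='hello', mandatory=False)
#   raw = ''.join(bundler.getchunks())
#
#   unbundler = getunbundler(ui, io.BytesIO(raw))
#   for part in unbundler.iterparts():
#       part.read()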
756
756
757 class unbundle20(unpackermixin):
757 class unbundle20(unpackermixin):
758 """interpret a bundle2 stream
758 """interpret a bundle2 stream
759
759
760 This class is fed with a binary stream and yields parts through its
760 This class is fed with a binary stream and yields parts through its
761 `iterparts` methods."""
761 `iterparts` methods."""
762
762
763 _magicstring = 'HG20'
763 _magicstring = 'HG20'
764
764
765 def __init__(self, ui, fp):
765 def __init__(self, ui, fp):
766 """If header is specified, we do not read it out of the stream."""
766 """If header is specified, we do not read it out of the stream."""
767 self.ui = ui
767 self.ui = ui
768 self._compengine = util.compengines.forbundletype('UN')
768 self._compengine = util.compengines.forbundletype('UN')
769 self._compressed = None
769 self._compressed = None
770 super(unbundle20, self).__init__(fp)
770 super(unbundle20, self).__init__(fp)
771
771
772 @util.propertycache
772 @util.propertycache
773 def params(self):
773 def params(self):
774 """dictionary of stream level parameters"""
774 """dictionary of stream level parameters"""
775 indebug(self.ui, 'reading bundle2 stream parameters')
775 indebug(self.ui, 'reading bundle2 stream parameters')
776 params = {}
776 params = {}
777 paramssize = self._unpack(_fstreamparamsize)[0]
777 paramssize = self._unpack(_fstreamparamsize)[0]
778 if paramssize < 0:
778 if paramssize < 0:
779 raise error.BundleValueError('negative bundle param size: %i'
779 raise error.BundleValueError('negative bundle param size: %i'
780 % paramssize)
780 % paramssize)
781 if paramssize:
781 if paramssize:
782 params = self._readexact(paramssize)
782 params = self._readexact(paramssize)
783 params = self._processallparams(params)
783 params = self._processallparams(params)
784 return params
784 return params
785
785
786 def _processallparams(self, paramsblock):
786 def _processallparams(self, paramsblock):
787 """"""
787 """"""
788 params = util.sortdict()
788 params = util.sortdict()
789 for p in paramsblock.split(' '):
789 for p in paramsblock.split(' '):
790 p = p.split('=', 1)
790 p = p.split('=', 1)
791 p = [urlreq.unquote(i) for i in p]
791 p = [urlreq.unquote(i) for i in p]
792 if len(p) < 2:
792 if len(p) < 2:
793 p.append(None)
793 p.append(None)
794 self._processparam(*p)
794 self._processparam(*p)
795 params[p[0]] = p[1]
795 params[p[0]] = p[1]
796 return params
796 return params
797
797
798
798
799 def _processparam(self, name, value):
799 def _processparam(self, name, value):
800 """process a parameter, applying its effect if needed
800 """process a parameter, applying its effect if needed
801
801
802 Parameter starting with a lower case letter are advisory and will be
802 Parameter starting with a lower case letter are advisory and will be
803 ignored when unknown. Those starting with an upper case letter are
803 ignored when unknown. Those starting with an upper case letter are
804 mandatory and will this function will raise a KeyError when unknown.
804 mandatory and will this function will raise a KeyError when unknown.
805
805
806 Note: no option are currently supported. Any input will be either
806 Note: no option are currently supported. Any input will be either
807 ignored or failing.
807 ignored or failing.
808 """
808 """
809 if not name:
809 if not name:
810 raise ValueError(r'empty parameter name')
810 raise ValueError(r'empty parameter name')
811 if name[0:1] not in pycompat.bytestr(string.ascii_letters):
811 if name[0:1] not in pycompat.bytestr(string.ascii_letters):
812 raise ValueError(r'non letter first character: %s' % name)
812 raise ValueError(r'non letter first character: %s' % name)
813 try:
813 try:
814 handler = b2streamparamsmap[name.lower()]
814 handler = b2streamparamsmap[name.lower()]
815 except KeyError:
815 except KeyError:
816 if name[0:1].islower():
816 if name[0:1].islower():
817 indebug(self.ui, "ignoring unknown parameter %s" % name)
817 indebug(self.ui, "ignoring unknown parameter %s" % name)
818 else:
818 else:
819 raise error.BundleUnknownFeatureError(params=(name,))
819 raise error.BundleUnknownFeatureError(params=(name,))
820 else:
820 else:
821 handler(self, name, value)
821 handler(self, name, value)
822
822
823 def _forwardchunks(self):
823 def _forwardchunks(self):
824 """utility to transfer a bundle2 as binary
824 """utility to transfer a bundle2 as binary
825
825
826 This is made necessary by the fact the 'getbundle' command over 'ssh'
826 This is made necessary by the fact the 'getbundle' command over 'ssh'
827 have no way to know then the reply end, relying on the bundle to be
827 have no way to know then the reply end, relying on the bundle to be
828 interpreted to know its end. This is terrible and we are sorry, but we
828 interpreted to know its end. This is terrible and we are sorry, but we
829 needed to move forward to get general delta enabled.
829 needed to move forward to get general delta enabled.
830 """
830 """
831 yield self._magicstring
831 yield self._magicstring
832 assert 'params' not in vars(self)
832 assert 'params' not in vars(self)
833 paramssize = self._unpack(_fstreamparamsize)[0]
833 paramssize = self._unpack(_fstreamparamsize)[0]
834 if paramssize < 0:
834 if paramssize < 0:
835 raise error.BundleValueError('negative bundle param size: %i'
835 raise error.BundleValueError('negative bundle param size: %i'
836 % paramssize)
836 % paramssize)
837 yield _pack(_fstreamparamsize, paramssize)
837 yield _pack(_fstreamparamsize, paramssize)
838 if paramssize:
838 if paramssize:
839 params = self._readexact(paramssize)
839 params = self._readexact(paramssize)
840 self._processallparams(params)
840 self._processallparams(params)
841 yield params
841 yield params
842 assert self._compengine.bundletype == 'UN'
842 assert self._compengine.bundletype == 'UN'
843 # From there, payload might need to be decompressed
843 # From there, payload might need to be decompressed
844 self._fp = self._compengine.decompressorreader(self._fp)
844 self._fp = self._compengine.decompressorreader(self._fp)
845 emptycount = 0
845 emptycount = 0
846 while emptycount < 2:
846 while emptycount < 2:
847 # so we can brainlessly loop
847 # so we can brainlessly loop
848 assert _fpartheadersize == _fpayloadsize
848 assert _fpartheadersize == _fpayloadsize
849 size = self._unpack(_fpartheadersize)[0]
849 size = self._unpack(_fpartheadersize)[0]
850 yield _pack(_fpartheadersize, size)
850 yield _pack(_fpartheadersize, size)
851 if size:
851 if size:
852 emptycount = 0
852 emptycount = 0
853 else:
853 else:
854 emptycount += 1
854 emptycount += 1
855 continue
855 continue
856 if size == flaginterrupt:
856 if size == flaginterrupt:
857 continue
857 continue
858 elif size < 0:
858 elif size < 0:
859 raise error.BundleValueError('negative chunk size: %i')
859 raise error.BundleValueError('negative chunk size: %i')
860 yield self._readexact(size)
860 yield self._readexact(size)
861
861
862
862
863 def iterparts(self, seekable=False):
863 def iterparts(self, seekable=False):
864 """yield all parts contained in the stream"""
864 """yield all parts contained in the stream"""
865 cls = seekableunbundlepart if seekable else unbundlepart
865 cls = seekableunbundlepart if seekable else unbundlepart
866 # make sure param have been loaded
866 # make sure param have been loaded
867 self.params
867 self.params
868 # From there, payload need to be decompressed
868 # From there, payload need to be decompressed
869 self._fp = self._compengine.decompressorreader(self._fp)
869 self._fp = self._compengine.decompressorreader(self._fp)
870 indebug(self.ui, 'start extraction of bundle2 parts')
870 indebug(self.ui, 'start extraction of bundle2 parts')
871 headerblock = self._readpartheader()
871 headerblock = self._readpartheader()
872 while headerblock is not None:
872 while headerblock is not None:
873 part = cls(self.ui, headerblock, self._fp)
873 part = cls(self.ui, headerblock, self._fp)
874 yield part
874 yield part
875 # Ensure part is fully consumed so we can start reading the next
875 # Ensure part is fully consumed so we can start reading the next
876 # part.
876 # part.
877 part.consume()
877 part.consume()
878
878
879 headerblock = self._readpartheader()
879 headerblock = self._readpartheader()
880 indebug(self.ui, 'end of bundle2 stream')
880 indebug(self.ui, 'end of bundle2 stream')
881
881
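For consumers, iterparts() is usually the whole interface: each yielded part exposes its type, parameters and a file-like payload, and is drained before the next one is produced. A hedged usage sketch follows; ui and fp are placeholders for a real ui object and a bundle2 stream obtained elsewhere (getunbundler is defined earlier in this module).

    # Illustrative only.
    unbundler = getunbundler(ui, fp)
    for part in unbundler.iterparts():
        ui.debug('part %s: %s\n' % (part.id, part.type))
        if part.type == 'changegroup':
            data = part.read()  # remaining payload, already de-framed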
882 def _readpartheader(self):
882 def _readpartheader(self):
883 """reads a part header size and return the bytes blob
883 """reads a part header size and return the bytes blob
884
884
885 returns None if empty"""
885 returns None if empty"""
886 headersize = self._unpack(_fpartheadersize)[0]
886 headersize = self._unpack(_fpartheadersize)[0]
887 if headersize < 0:
887 if headersize < 0:
888 raise error.BundleValueError('negative part header size: %i'
888 raise error.BundleValueError('negative part header size: %i'
889 % headersize)
889 % headersize)
890 indebug(self.ui, 'part header size: %i' % headersize)
890 indebug(self.ui, 'part header size: %i' % headersize)
891 if headersize:
891 if headersize:
892 return self._readexact(headersize)
892 return self._readexact(headersize)
893 return None
893 return None
894
894
895 def compressed(self):
895 def compressed(self):
896 self.params # load params
896 self.params # load params
897 return self._compressed
897 return self._compressed
898
898
899 def close(self):
899 def close(self):
900 """close underlying file"""
900 """close underlying file"""
901 if util.safehasattr(self._fp, 'close'):
901 if util.safehasattr(self._fp, 'close'):
902 return self._fp.close()
902 return self._fp.close()
903
903
904 formatmap = {'20': unbundle20}
904 formatmap = {'20': unbundle20}
905
905
906 b2streamparamsmap = {}
906 b2streamparamsmap = {}
907
907
908 def b2streamparamhandler(name):
908 def b2streamparamhandler(name):
909 """register a handler for a stream level parameter"""
909 """register a handler for a stream level parameter"""
910 def decorator(func):
910 def decorator(func):
911 assert name not in b2streamparamsmap
911 assert name not in b2streamparamsmap
912 b2streamparamsmap[name] = func
912 b2streamparamsmap[name] = func
913 return func
913 return func
914 return decorator
914 return decorator
915
915
916 @b2streamparamhandler('compression')
916 @b2streamparamhandler('compression')
917 def processcompression(unbundler, param, value):
917 def processcompression(unbundler, param, value):
918 """read compression parameter and install payload decompression"""
918 """read compression parameter and install payload decompression"""
919 if value not in util.compengines.supportedbundletypes:
919 if value not in util.compengines.supportedbundletypes:
920 raise error.BundleUnknownFeatureError(params=(param,),
920 raise error.BundleUnknownFeatureError(params=(param,),
921 values=(value,))
921 values=(value,))
922 unbundler._compengine = util.compengines.forbundletype(value)
922 unbundler._compengine = util.compengines.forbundletype(value)
923 if value is not None:
923 if value is not None:
924 unbundler._compressed = True
924 unbundler._compressed = True
925
925
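New stream-level parameters are wired up the same way as 'compression' above: decorate a handler with b2streamparamhandler and it is invoked with the unbundler, the parameter name and its (possibly None) value. The example below is purely illustrative; the parameter name 'checksum' and its behaviour are invented.

    @b2streamparamhandler('checksum')
    def processchecksum(unbundler, param, value):
        """illustrative only: record an advisory stream-level parameter"""
        unbundler.ui.debug('bundle2 stream checksum parameter: %s\n'
                           % (value or '(none)'))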
926 class bundlepart(object):
926 class bundlepart(object):
927 """A bundle2 part contains application level payload
927 """A bundle2 part contains application level payload
928
928
929 The part `type` is used to route the part to the application level
929 The part `type` is used to route the part to the application level
930 handler.
930 handler.
931
931
932 The part payload is contained in ``part.data``. It could be raw bytes or a
932 The part payload is contained in ``part.data``. It could be raw bytes or a
933 generator of byte chunks.
933 generator of byte chunks.
934
934
935 You can add parameters to the part using the ``addparam`` method.
935 You can add parameters to the part using the ``addparam`` method.
936 Parameters can be either mandatory (default) or advisory. Remote side
936 Parameters can be either mandatory (default) or advisory. Remote side
937 should be able to safely ignore the advisory ones.
937 should be able to safely ignore the advisory ones.
938
938
939 Neither the data nor the parameters can be modified after generation has begun.
939 Neither the data nor the parameters can be modified after generation has begun.
940 """
940 """
941
941
942 def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
942 def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
943 data='', mandatory=True):
943 data='', mandatory=True):
944 validateparttype(parttype)
944 validateparttype(parttype)
945 self.id = None
945 self.id = None
946 self.type = parttype
946 self.type = parttype
947 self._data = data
947 self._data = data
948 self._mandatoryparams = list(mandatoryparams)
948 self._mandatoryparams = list(mandatoryparams)
949 self._advisoryparams = list(advisoryparams)
949 self._advisoryparams = list(advisoryparams)
950 # checking for duplicated entries
950 # checking for duplicated entries
951 self._seenparams = set()
951 self._seenparams = set()
952 for pname, __ in self._mandatoryparams + self._advisoryparams:
952 for pname, __ in self._mandatoryparams + self._advisoryparams:
953 if pname in self._seenparams:
953 if pname in self._seenparams:
954 raise error.ProgrammingError('duplicated params: %s' % pname)
954 raise error.ProgrammingError('duplicated params: %s' % pname)
955 self._seenparams.add(pname)
955 self._seenparams.add(pname)
956 # status of the part's generation:
956 # status of the part's generation:
957 # - None: not started,
957 # - None: not started,
958 # - False: currently generated,
958 # - False: currently generated,
959 # - True: generation done.
959 # - True: generation done.
960 self._generated = None
960 self._generated = None
961 self.mandatory = mandatory
961 self.mandatory = mandatory
962
962
963 def __repr__(self):
963 def __repr__(self):
964 cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
964 cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
965 return ('<%s object at %x; id: %s; type: %s; mandatory: %s>'
965 return ('<%s object at %x; id: %s; type: %s; mandatory: %s>'
966 % (cls, id(self), self.id, self.type, self.mandatory))
966 % (cls, id(self), self.id, self.type, self.mandatory))
967
967
968 def copy(self):
968 def copy(self):
969 """return a copy of the part
969 """return a copy of the part
970
970
971 The new part has the very same content but no partid assigned yet.
971 The new part has the very same content but no partid assigned yet.
972 Parts with generated data cannot be copied."""
972 Parts with generated data cannot be copied."""
973 assert not util.safehasattr(self.data, 'next')
973 assert not util.safehasattr(self.data, 'next')
974 return self.__class__(self.type, self._mandatoryparams,
974 return self.__class__(self.type, self._mandatoryparams,
975 self._advisoryparams, self._data, self.mandatory)
975 self._advisoryparams, self._data, self.mandatory)
976
976
977 # methods used to define the part content
977 # methods used to define the part content
978 @property
978 @property
979 def data(self):
979 def data(self):
980 return self._data
980 return self._data
981
981
982 @data.setter
982 @data.setter
983 def data(self, data):
983 def data(self, data):
984 if self._generated is not None:
984 if self._generated is not None:
985 raise error.ReadOnlyPartError('part is being generated')
985 raise error.ReadOnlyPartError('part is being generated')
986 self._data = data
986 self._data = data
987
987
988 @property
988 @property
989 def mandatoryparams(self):
989 def mandatoryparams(self):
990 # make it an immutable tuple to force people through ``addparam``
990 # make it an immutable tuple to force people through ``addparam``
991 return tuple(self._mandatoryparams)
991 return tuple(self._mandatoryparams)
992
992
993 @property
993 @property
994 def advisoryparams(self):
994 def advisoryparams(self):
995 # make it an immutable tuple to force people through ``addparam``
995 # make it an immutable tuple to force people through ``addparam``
996 return tuple(self._advisoryparams)
996 return tuple(self._advisoryparams)
997
997
998 def addparam(self, name, value='', mandatory=True):
998 def addparam(self, name, value='', mandatory=True):
999 """add a parameter to the part
999 """add a parameter to the part
1000
1000
1001 If 'mandatory' is set to True, the remote handler must claim support
1001 If 'mandatory' is set to True, the remote handler must claim support
1002 for this parameter or the unbundling will be aborted.
1002 for this parameter or the unbundling will be aborted.
1003
1003
1004 The 'name' and 'value' cannot exceed 255 bytes each.
1004 The 'name' and 'value' cannot exceed 255 bytes each.
1005 """
1005 """
1006 if self._generated is not None:
1006 if self._generated is not None:
1007 raise error.ReadOnlyPartError('part is being generated')
1007 raise error.ReadOnlyPartError('part is being generated')
1008 if name in self._seenparams:
1008 if name in self._seenparams:
1009 raise ValueError('duplicated params: %s' % name)
1009 raise ValueError('duplicated params: %s' % name)
1010 self._seenparams.add(name)
1010 self._seenparams.add(name)
1011 params = self._advisoryparams
1011 params = self._advisoryparams
1012 if mandatory:
1012 if mandatory:
1013 params = self._mandatoryparams
1013 params = self._mandatoryparams
1014 params.append((name, value))
1014 params.append((name, value))
1015
1015
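Putting the constructor and addparam() together, building a part by hand looks roughly like the sketch below; the part type 'x-example:demo' and its parameters are made up, and real callers normally go through bundle20.newpart, which assigns the part id and registers the part with a bundler.

    # Sketch only.
    part = bundlepart('x-example:demo', data='some payload', mandatory=False)
    part.addparam('flavor', 'plain')                      # mandatory parameter
    part.addparam('hint', 'ignore-me', mandatory=False)   # advisory parameter
    assert part.mandatoryparams == (('flavor', 'plain'),)
    assert part.advisoryparams == (('hint', 'ignore-me'),)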
1016 # methods used to generate the bundle2 stream
1016 # methods used to generate the bundle2 stream
1017 def getchunks(self, ui):
1017 def getchunks(self, ui):
1018 if self._generated is not None:
1018 if self._generated is not None:
1019 raise error.ProgrammingError('part can only be consumed once')
1019 raise error.ProgrammingError('part can only be consumed once')
1020 self._generated = False
1020 self._generated = False
1021
1021
1022 if ui.debugflag:
1022 if ui.debugflag:
1023 msg = ['bundle2-output-part: "%s"' % self.type]
1023 msg = ['bundle2-output-part: "%s"' % self.type]
1024 if not self.mandatory:
1024 if not self.mandatory:
1025 msg.append(' (advisory)')
1025 msg.append(' (advisory)')
1026 nbmp = len(self.mandatoryparams)
1026 nbmp = len(self.mandatoryparams)
1027 nbap = len(self.advisoryparams)
1027 nbap = len(self.advisoryparams)
1028 if nbmp or nbap:
1028 if nbmp or nbap:
1029 msg.append(' (params:')
1029 msg.append(' (params:')
1030 if nbmp:
1030 if nbmp:
1031 msg.append(' %i mandatory' % nbmp)
1031 msg.append(' %i mandatory' % nbmp)
1032 if nbap:
1032 if nbap:
1033 msg.append(' %i advisory' % nbap)
1033 msg.append(' %i advisory' % nbap)
1034 msg.append(')')
1034 msg.append(')')
1035 if not self.data:
1035 if not self.data:
1036 msg.append(' empty payload')
1036 msg.append(' empty payload')
1037 elif (util.safehasattr(self.data, 'next')
1037 elif (util.safehasattr(self.data, 'next')
1038 or util.safehasattr(self.data, '__next__')):
1038 or util.safehasattr(self.data, '__next__')):
1039 msg.append(' streamed payload')
1039 msg.append(' streamed payload')
1040 else:
1040 else:
1041 msg.append(' %i bytes payload' % len(self.data))
1041 msg.append(' %i bytes payload' % len(self.data))
1042 msg.append('\n')
1042 msg.append('\n')
1043 ui.debug(''.join(msg))
1043 ui.debug(''.join(msg))
1044
1044
1045 #### header
1045 #### header
1046 if self.mandatory:
1046 if self.mandatory:
1047 parttype = self.type.upper()
1047 parttype = self.type.upper()
1048 else:
1048 else:
1049 parttype = self.type.lower()
1049 parttype = self.type.lower()
1050 outdebug(ui, 'part %s: "%s"' % (pycompat.bytestr(self.id), parttype))
1050 outdebug(ui, 'part %s: "%s"' % (pycompat.bytestr(self.id), parttype))
1051 ## parttype
1051 ## parttype
1052 header = [_pack(_fparttypesize, len(parttype)),
1052 header = [_pack(_fparttypesize, len(parttype)),
1053 parttype, _pack(_fpartid, self.id),
1053 parttype, _pack(_fpartid, self.id),
1054 ]
1054 ]
1055 ## parameters
1055 ## parameters
1056 # count
1056 # count
1057 manpar = self.mandatoryparams
1057 manpar = self.mandatoryparams
1058 advpar = self.advisoryparams
1058 advpar = self.advisoryparams
1059 header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
1059 header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
1060 # size
1060 # size
1061 parsizes = []
1061 parsizes = []
1062 for key, value in manpar:
1062 for key, value in manpar:
1063 parsizes.append(len(key))
1063 parsizes.append(len(key))
1064 parsizes.append(len(value))
1064 parsizes.append(len(value))
1065 for key, value in advpar:
1065 for key, value in advpar:
1066 parsizes.append(len(key))
1066 parsizes.append(len(key))
1067 parsizes.append(len(value))
1067 parsizes.append(len(value))
1068 paramsizes = _pack(_makefpartparamsizes(len(parsizes) // 2), *parsizes)
1068 paramsizes = _pack(_makefpartparamsizes(len(parsizes) // 2), *parsizes)
1069 header.append(paramsizes)
1069 header.append(paramsizes)
1070 # key, value
1070 # key, value
1071 for key, value in manpar:
1071 for key, value in manpar:
1072 header.append(key)
1072 header.append(key)
1073 header.append(value)
1073 header.append(value)
1074 for key, value in advpar:
1074 for key, value in advpar:
1075 header.append(key)
1075 header.append(key)
1076 header.append(value)
1076 header.append(value)
1077 ## finalize header
1077 ## finalize header
1078 try:
1078 try:
1079 headerchunk = ''.join(header)
1079 headerchunk = ''.join(header)
1080 except TypeError:
1080 except TypeError:
1081 raise TypeError(r'Found a non-bytes trying to '
1081 raise TypeError(r'Found a non-bytes trying to '
1082 r'build bundle part header: %r' % header)
1082 r'build bundle part header: %r' % header)
1083 outdebug(ui, 'header chunk size: %i' % len(headerchunk))
1083 outdebug(ui, 'header chunk size: %i' % len(headerchunk))
1084 yield _pack(_fpartheadersize, len(headerchunk))
1084 yield _pack(_fpartheadersize, len(headerchunk))
1085 yield headerchunk
1085 yield headerchunk
1086 ## payload
1086 ## payload
1087 try:
1087 try:
1088 for chunk in self._payloadchunks():
1088 for chunk in self._payloadchunks():
1089 outdebug(ui, 'payload chunk size: %i' % len(chunk))
1089 outdebug(ui, 'payload chunk size: %i' % len(chunk))
1090 yield _pack(_fpayloadsize, len(chunk))
1090 yield _pack(_fpayloadsize, len(chunk))
1091 yield chunk
1091 yield chunk
1092 except GeneratorExit:
1092 except GeneratorExit:
1093 # GeneratorExit means that nobody is listening for our
1093 # GeneratorExit means that nobody is listening for our
1094 # results anyway, so just bail quickly rather than trying
1094 # results anyway, so just bail quickly rather than trying
1095 # to produce an error part.
1095 # to produce an error part.
1096 ui.debug('bundle2-generatorexit\n')
1096 ui.debug('bundle2-generatorexit\n')
1097 raise
1097 raise
1098 except BaseException as exc:
1098 except BaseException as exc:
1099 bexc = stringutil.forcebytestr(exc)
1099 bexc = stringutil.forcebytestr(exc)
1100 # backup exception data for later
1100 # backup exception data for later
1101 ui.debug('bundle2-input-stream-interrupt: encoding exception %s'
1101 ui.debug('bundle2-input-stream-interrupt: encoding exception %s'
1102 % bexc)
1102 % bexc)
1103 tb = sys.exc_info()[2]
1103 tb = sys.exc_info()[2]
1104 msg = 'unexpected error: %s' % bexc
1104 msg = 'unexpected error: %s' % bexc
1105 interpart = bundlepart('error:abort', [('message', msg)],
1105 interpart = bundlepart('error:abort', [('message', msg)],
1106 mandatory=False)
1106 mandatory=False)
1107 interpart.id = 0
1107 interpart.id = 0
1108 yield _pack(_fpayloadsize, -1)
1108 yield _pack(_fpayloadsize, -1)
1109 for chunk in interpart.getchunks(ui=ui):
1109 for chunk in interpart.getchunks(ui=ui):
1110 yield chunk
1110 yield chunk
1111 outdebug(ui, 'closing payload chunk')
1111 outdebug(ui, 'closing payload chunk')
1112 # abort current part payload
1112 # abort current part payload
1113 yield _pack(_fpayloadsize, 0)
1113 yield _pack(_fpayloadsize, 0)
1114 pycompat.raisewithtb(exc, tb)
1114 pycompat.raisewithtb(exc, tb)
1115 # end of payload
1115 # end of payload
1116 outdebug(ui, 'closing payload chunk')
1116 outdebug(ui, 'closing payload chunk')
1117 yield _pack(_fpayloadsize, 0)
1117 yield _pack(_fpayloadsize, 0)
1118 self._generated = True
1118 self._generated = True
1119
1119
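The header assembled above has a compact fixed layout: one unsigned byte for the length of the part type, the type itself, a 32-bit part id, one byte each for the mandatory and advisory parameter counts, a byte-sized length for every key and value, then the keys and values back to back; the whole header is then framed by a 32-bit size. The standalone sketch below packs such a header by hand; the struct formats are assumptions matching _fparttypesize, _fpartid and _fpartparamcount as used in getchunks().

    import struct

    def pack_part_header(parttype, partid, manparams, advparams):
        # Sketch of the header layout produced by getchunks() above;
        # parttype and all keys/values are expected to be bytes.
        header = [struct.pack('>B', len(parttype)), parttype,
                  struct.pack('>I', partid),
                  struct.pack('>BB', len(manparams), len(advparams))]
        sizes = []
        for key, value in manparams + advparams:
            sizes.extend((len(key), len(value)))
        header.append(struct.pack('>' + 'B' * len(sizes), *sizes))
        for key, value in manparams + advparams:
            header.extend((key, value))
        blob = b''.join(header)
        return struct.pack('>i', len(blob)) + blob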
1120 def _payloadchunks(self):
1120 def _payloadchunks(self):
1121 """yield chunks of a the part payload
1121 """yield chunks of a the part payload
1122
1122
1123 Exists to handle the different methods to provide data to a part."""
1123 Exists to handle the different methods to provide data to a part."""
1124 # we only support fixed size data now.
1124 # we only support fixed size data now.
1125 # This will be improved in the future.
1125 # This will be improved in the future.
1126 if (util.safehasattr(self.data, 'next')
1126 if (util.safehasattr(self.data, 'next')
1127 or util.safehasattr(self.data, '__next__')):
1127 or util.safehasattr(self.data, '__next__')):
1128 buff = util.chunkbuffer(self.data)
1128 buff = util.chunkbuffer(self.data)
1129 chunk = buff.read(preferedchunksize)
1129 chunk = buff.read(preferedchunksize)
1130 while chunk:
1130 while chunk:
1131 yield chunk
1131 yield chunk
1132 chunk = buff.read(preferedchunksize)
1132 chunk = buff.read(preferedchunksize)
1133 elif len(self.data):
1133 elif len(self.data):
1134 yield self.data
1134 yield self.data
1135
1135
1136
1136
1137 flaginterrupt = -1
1137 flaginterrupt = -1
1138
1138
1139 class interrupthandler(unpackermixin):
1139 class interrupthandler(unpackermixin):
1140 """read one part and process it with restricted capability
1140 """read one part and process it with restricted capability
1141
1141
1142 This allows transmitting exceptions raised on the producer side during part
1142 This allows transmitting exceptions raised on the producer side during part
1143 iteration while the consumer is reading a part.
1143 iteration while the consumer is reading a part.
1144
1144
1145 Parts processed in this manner only have access to a ui object.
1145 Parts processed in this manner only have access to a ui object.
1146
1146
1147 def __init__(self, ui, fp):
1147 def __init__(self, ui, fp):
1148 super(interrupthandler, self).__init__(fp)
1148 super(interrupthandler, self).__init__(fp)
1149 self.ui = ui
1149 self.ui = ui
1150
1150
1151 def _readpartheader(self):
1151 def _readpartheader(self):
1152 """reads a part header size and return the bytes blob
1152 """reads a part header size and return the bytes blob
1153
1153
1154 returns None if empty"""
1154 returns None if empty"""
1155 headersize = self._unpack(_fpartheadersize)[0]
1155 headersize = self._unpack(_fpartheadersize)[0]
1156 if headersize < 0:
1156 if headersize < 0:
1157 raise error.BundleValueError('negative part header size: %i'
1157 raise error.BundleValueError('negative part header size: %i'
1158 % headersize)
1158 % headersize)
1159 indebug(self.ui, 'part header size: %i' % headersize)
1159 indebug(self.ui, 'part header size: %i' % headersize)
1160 if headersize:
1160 if headersize:
1161 return self._readexact(headersize)
1161 return self._readexact(headersize)
1162 return None
1162 return None
1163
1163
1164 def __call__(self):
1164 def __call__(self):
1165
1165
1166 self.ui.debug('bundle2-input-stream-interrupt:'
1166 self.ui.debug('bundle2-input-stream-interrupt:'
1167 ' opening out of band context\n')
1167 ' opening out of band context\n')
1168 indebug(self.ui, 'bundle2 stream interruption, looking for a part.')
1168 indebug(self.ui, 'bundle2 stream interruption, looking for a part.')
1169 headerblock = self._readpartheader()
1169 headerblock = self._readpartheader()
1170 if headerblock is None:
1170 if headerblock is None:
1171 indebug(self.ui, 'no part found during interruption.')
1171 indebug(self.ui, 'no part found during interruption.')
1172 return
1172 return
1173 part = unbundlepart(self.ui, headerblock, self._fp)
1173 part = unbundlepart(self.ui, headerblock, self._fp)
1174 op = interruptoperation(self.ui)
1174 op = interruptoperation(self.ui)
1175 hardabort = False
1175 hardabort = False
1176 try:
1176 try:
1177 _processpart(op, part)
1177 _processpart(op, part)
1178 except (SystemExit, KeyboardInterrupt):
1178 except (SystemExit, KeyboardInterrupt):
1179 hardabort = True
1179 hardabort = True
1180 raise
1180 raise
1181 finally:
1181 finally:
1182 if not hardabort:
1182 if not hardabort:
1183 part.consume()
1183 part.consume()
1184 self.ui.debug('bundle2-input-stream-interrupt:'
1184 self.ui.debug('bundle2-input-stream-interrupt:'
1185 ' closing out of band context\n')
1185 ' closing out of band context\n')
1186
1186
1187 class interruptoperation(object):
1187 class interruptoperation(object):
1188 """A limited operation to be use by part handler during interruption
1188 """A limited operation to be use by part handler during interruption
1189
1189
1190 It only has access to a ui object.
1190 It only has access to a ui object.
1191 """
1191 """
1192
1192
1193 def __init__(self, ui):
1193 def __init__(self, ui):
1194 self.ui = ui
1194 self.ui = ui
1195 self.reply = None
1195 self.reply = None
1196 self.captureoutput = False
1196 self.captureoutput = False
1197
1197
1198 @property
1198 @property
1199 def repo(self):
1199 def repo(self):
1200 raise error.ProgrammingError('no repo access from stream interruption')
1200 raise error.ProgrammingError('no repo access from stream interruption')
1201
1201
1202 def gettransaction(self):
1202 def gettransaction(self):
1203 raise TransactionUnavailable('no repo access from stream interruption')
1203 raise TransactionUnavailable('no repo access from stream interruption')
1204
1204
1205 def decodepayloadchunks(ui, fh):
1205 def decodepayloadchunks(ui, fh):
1206 """Reads bundle2 part payload data into chunks.
1206 """Reads bundle2 part payload data into chunks.
1207
1207
1208 Part payload data consists of framed chunks. This function takes
1208 Part payload data consists of framed chunks. This function takes
1209 a file handle and emits those chunks.
1209 a file handle and emits those chunks.
1210 """
1210 """
1211 dolog = ui.configbool('devel', 'bundle2.debug')
1211 dolog = ui.configbool('devel', 'bundle2.debug')
1212 debug = ui.debug
1212 debug = ui.debug
1213
1213
1214 headerstruct = struct.Struct(_fpayloadsize)
1214 headerstruct = struct.Struct(_fpayloadsize)
1215 headersize = headerstruct.size
1215 headersize = headerstruct.size
1216 unpack = headerstruct.unpack
1216 unpack = headerstruct.unpack
1217
1217
1218 readexactly = changegroup.readexactly
1218 readexactly = changegroup.readexactly
1219 read = fh.read
1219 read = fh.read
1220
1220
1221 chunksize = unpack(readexactly(fh, headersize))[0]
1221 chunksize = unpack(readexactly(fh, headersize))[0]
1222 indebug(ui, 'payload chunk size: %i' % chunksize)
1222 indebug(ui, 'payload chunk size: %i' % chunksize)
1223
1223
1224 # changegroup.readexactly() is inlined below for performance.
1224 # changegroup.readexactly() is inlined below for performance.
1225 while chunksize:
1225 while chunksize:
1226 if chunksize >= 0:
1226 if chunksize >= 0:
1227 s = read(chunksize)
1227 s = read(chunksize)
1228 if len(s) < chunksize:
1228 if len(s) < chunksize:
1229 raise error.Abort(_('stream ended unexpectedly '
1229 raise error.Abort(_('stream ended unexpectedly '
1230 ' (got %d bytes, expected %d)') %
1230 ' (got %d bytes, expected %d)') %
1231 (len(s), chunksize))
1231 (len(s), chunksize))
1232
1232
1233 yield s
1233 yield s
1234 elif chunksize == flaginterrupt:
1234 elif chunksize == flaginterrupt:
1235 # Interrupt "signal" detected. The regular stream is interrupted
1235 # Interrupt "signal" detected. The regular stream is interrupted
1236 # and a bundle2 part follows. Consume it.
1236 # and a bundle2 part follows. Consume it.
1237 interrupthandler(ui, fh)()
1237 interrupthandler(ui, fh)()
1238 else:
1238 else:
1239 raise error.BundleValueError(
1239 raise error.BundleValueError(
1240 'negative payload chunk size: %s' % chunksize)
1240 'negative payload chunk size: %s' % chunksize)
1241
1241
1242 s = read(headersize)
1242 s = read(headersize)
1243 if len(s) < headersize:
1243 if len(s) < headersize:
1244 raise error.Abort(_('stream ended unexpectedly '
1244 raise error.Abort(_('stream ended unexpectedly '
1245 ' (got %d bytes, expected %d)') %
1245 ' (got %d bytes, expected %d)') %
1246 (len(s), headersize))
1246 (len(s), headersize))
1247
1247
1248 chunksize = unpack(s)[0]
1248 chunksize = unpack(s)[0]
1249
1249
1250 # indebug() inlined for performance.
1250 # indebug() inlined for performance.
1251 if dolog:
1251 if dolog:
1252 debug('bundle2-input: payload chunk size: %i\n' % chunksize)
1252 debug('bundle2-input: payload chunk size: %i\n' % chunksize)
1253
1253
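In other words, a part payload on the wire is a sequence of int32-framed chunks terminated by a zero-sized chunk, with -1 acting as an escape for an interleaved interrupt part. A hedged usage sketch, assuming fh is a file object positioned just past a part header:

    # Illustrative only: collect a whole part payload in memory.
    payload = b''.join(decodepayloadchunks(ui, fh))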
1254 class unbundlepart(unpackermixin):
1254 class unbundlepart(unpackermixin):
1255 """a bundle part read from a bundle"""
1255 """a bundle part read from a bundle"""
1256
1256
1257 def __init__(self, ui, header, fp):
1257 def __init__(self, ui, header, fp):
1258 super(unbundlepart, self).__init__(fp)
1258 super(unbundlepart, self).__init__(fp)
1259 self._seekable = (util.safehasattr(fp, 'seek') and
1259 self._seekable = (util.safehasattr(fp, 'seek') and
1260 util.safehasattr(fp, 'tell'))
1260 util.safehasattr(fp, 'tell'))
1261 self.ui = ui
1261 self.ui = ui
1262 # unbundle state attr
1262 # unbundle state attr
1263 self._headerdata = header
1263 self._headerdata = header
1264 self._headeroffset = 0
1264 self._headeroffset = 0
1265 self._initialized = False
1265 self._initialized = False
1266 self.consumed = False
1266 self.consumed = False
1267 # part data
1267 # part data
1268 self.id = None
1268 self.id = None
1269 self.type = None
1269 self.type = None
1270 self.mandatoryparams = None
1270 self.mandatoryparams = None
1271 self.advisoryparams = None
1271 self.advisoryparams = None
1272 self.params = None
1272 self.params = None
1273 self.mandatorykeys = ()
1273 self.mandatorykeys = ()
1274 self._readheader()
1274 self._readheader()
1275 self._mandatory = None
1275 self._mandatory = None
1276 self._pos = 0
1276 self._pos = 0
1277
1277
1278 def _fromheader(self, size):
1278 def _fromheader(self, size):
1279 """return the next <size> byte from the header"""
1279 """return the next <size> byte from the header"""
1280 offset = self._headeroffset
1280 offset = self._headeroffset
1281 data = self._headerdata[offset:(offset + size)]
1281 data = self._headerdata[offset:(offset + size)]
1282 self._headeroffset = offset + size
1282 self._headeroffset = offset + size
1283 return data
1283 return data
1284
1284
1285 def _unpackheader(self, format):
1285 def _unpackheader(self, format):
1286 """read given format from header
1286 """read given format from header
1287
1287
1288 This automatically computes the size of the format to read.
1288 This automatically computes the size of the format to read.
1289 data = self._fromheader(struct.calcsize(format))
1289 data = self._fromheader(struct.calcsize(format))
1290 return _unpack(format, data)
1290 return _unpack(format, data)
1291
1291
1292 def _initparams(self, mandatoryparams, advisoryparams):
1292 def _initparams(self, mandatoryparams, advisoryparams):
1293 """internal function to setup all logic related parameters"""
1293 """internal function to setup all logic related parameters"""
1294 # make it read only to prevent people touching it by mistake.
1294 # make it read only to prevent people touching it by mistake.
1295 self.mandatoryparams = tuple(mandatoryparams)
1295 self.mandatoryparams = tuple(mandatoryparams)
1296 self.advisoryparams = tuple(advisoryparams)
1296 self.advisoryparams = tuple(advisoryparams)
1297 # user friendly UI
1297 # user friendly UI
1298 self.params = util.sortdict(self.mandatoryparams)
1298 self.params = util.sortdict(self.mandatoryparams)
1299 self.params.update(self.advisoryparams)
1299 self.params.update(self.advisoryparams)
1300 self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)
1300 self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)
1301
1301
1302 def _readheader(self):
1302 def _readheader(self):
1303 """read the header and setup the object"""
1303 """read the header and setup the object"""
1304 typesize = self._unpackheader(_fparttypesize)[0]
1304 typesize = self._unpackheader(_fparttypesize)[0]
1305 self.type = self._fromheader(typesize)
1305 self.type = self._fromheader(typesize)
1306 indebug(self.ui, 'part type: "%s"' % self.type)
1306 indebug(self.ui, 'part type: "%s"' % self.type)
1307 self.id = self._unpackheader(_fpartid)[0]
1307 self.id = self._unpackheader(_fpartid)[0]
1308 indebug(self.ui, 'part id: "%s"' % pycompat.bytestr(self.id))
1308 indebug(self.ui, 'part id: "%s"' % pycompat.bytestr(self.id))
1309 # extract mandatory bit from type
1309 # extract mandatory bit from type
1310 self.mandatory = (self.type != self.type.lower())
1310 self.mandatory = (self.type != self.type.lower())
1311 self.type = self.type.lower()
1311 self.type = self.type.lower()
1312 ## reading parameters
1312 ## reading parameters
1313 # param count
1313 # param count
1314 mancount, advcount = self._unpackheader(_fpartparamcount)
1314 mancount, advcount = self._unpackheader(_fpartparamcount)
1315 indebug(self.ui, 'part parameters: %i' % (mancount + advcount))
1315 indebug(self.ui, 'part parameters: %i' % (mancount + advcount))
1316 # param size
1316 # param size
1317 fparamsizes = _makefpartparamsizes(mancount + advcount)
1317 fparamsizes = _makefpartparamsizes(mancount + advcount)
1318 paramsizes = self._unpackheader(fparamsizes)
1318 paramsizes = self._unpackheader(fparamsizes)
1319 # make it a list of couples again
1319 # make it a list of couples again
1320 paramsizes = list(zip(paramsizes[::2], paramsizes[1::2]))
1320 paramsizes = list(zip(paramsizes[::2], paramsizes[1::2]))
1321 # split mandatory from advisory
1321 # split mandatory from advisory
1322 mansizes = paramsizes[:mancount]
1322 mansizes = paramsizes[:mancount]
1323 advsizes = paramsizes[mancount:]
1323 advsizes = paramsizes[mancount:]
1324 # retrieve param value
1324 # retrieve param value
1325 manparams = []
1325 manparams = []
1326 for key, value in mansizes:
1326 for key, value in mansizes:
1327 manparams.append((self._fromheader(key), self._fromheader(value)))
1327 manparams.append((self._fromheader(key), self._fromheader(value)))
1328 advparams = []
1328 advparams = []
1329 for key, value in advsizes:
1329 for key, value in advsizes:
1330 advparams.append((self._fromheader(key), self._fromheader(value)))
1330 advparams.append((self._fromheader(key), self._fromheader(value)))
1331 self._initparams(manparams, advparams)
1331 self._initparams(manparams, advparams)
1332 ## part payload
1332 ## part payload
1333 self._payloadstream = util.chunkbuffer(self._payloadchunks())
1333 self._payloadstream = util.chunkbuffer(self._payloadchunks())
1334 # we read the data, tell it
1334 # we read the data, tell it
1335 self._initialized = True
1335 self._initialized = True
1336
1336
1337 def _payloadchunks(self):
1337 def _payloadchunks(self):
1338 """Generator of decoded chunks in the payload."""
1338 """Generator of decoded chunks in the payload."""
1339 return decodepayloadchunks(self.ui, self._fp)
1339 return decodepayloadchunks(self.ui, self._fp)
1340
1340
1341 def consume(self):
1341 def consume(self):
1342 """Read the part payload until completion.
1342 """Read the part payload until completion.
1343
1343
1344 By consuming the part data, the underlying stream read offset will
1344 By consuming the part data, the underlying stream read offset will
1345 be advanced to the next part (or end of stream).
1345 be advanced to the next part (or end of stream).
1346 """
1346 """
1347 if self.consumed:
1347 if self.consumed:
1348 return
1348 return
1349
1349
1350 chunk = self.read(32768)
1350 chunk = self.read(32768)
1351 while chunk:
1351 while chunk:
1352 self._pos += len(chunk)
1352 self._pos += len(chunk)
1353 chunk = self.read(32768)
1353 chunk = self.read(32768)
1354
1354
1355 def read(self, size=None):
1355 def read(self, size=None):
1356 """read payload data"""
1356 """read payload data"""
1357 if not self._initialized:
1357 if not self._initialized:
1358 self._readheader()
1358 self._readheader()
1359 if size is None:
1359 if size is None:
1360 data = self._payloadstream.read()
1360 data = self._payloadstream.read()
1361 else:
1361 else:
1362 data = self._payloadstream.read(size)
1362 data = self._payloadstream.read(size)
1363 self._pos += len(data)
1363 self._pos += len(data)
1364 if size is None or len(data) < size:
1364 if size is None or len(data) < size:
1365 if not self.consumed and self._pos:
1365 if not self.consumed and self._pos:
1366 self.ui.debug('bundle2-input-part: total payload size %i\n'
1366 self.ui.debug('bundle2-input-part: total payload size %i\n'
1367 % self._pos)
1367 % self._pos)
1368 self.consumed = True
1368 self.consumed = True
1369 return data
1369 return data
1370
1370
1371 class seekableunbundlepart(unbundlepart):
1371 class seekableunbundlepart(unbundlepart):
1372 """A bundle2 part in a bundle that is seekable.
1372 """A bundle2 part in a bundle that is seekable.
1373
1373
1374 Regular ``unbundlepart`` instances can only be read once. This class
1374 Regular ``unbundlepart`` instances can only be read once. This class
1375 extends ``unbundlepart`` to enable bi-directional seeking within the
1375 extends ``unbundlepart`` to enable bi-directional seeking within the
1376 part.
1376 part.
1377
1377
1378 Bundle2 part data consists of framed chunks. Offsets when seeking
1378 Bundle2 part data consists of framed chunks. Offsets when seeking
1379 refer to the decoded data, not the offsets in the underlying bundle2
1379 refer to the decoded data, not the offsets in the underlying bundle2
1380 stream.
1380 stream.
1381
1381
1382 To facilitate quickly seeking within the decoded data, instances of this
1382 To facilitate quickly seeking within the decoded data, instances of this
1383 class maintain a mapping between offsets in the underlying stream and
1383 class maintain a mapping between offsets in the underlying stream and
1384 the decoded payload. This mapping will consume memory in proportion
1384 the decoded payload. This mapping will consume memory in proportion
1385 to the number of chunks within the payload (which almost certainly
1385 to the number of chunks within the payload (which almost certainly
1386 increases in proportion with the size of the part).
1386 increases in proportion with the size of the part).
1387 """
1387 """
1388 def __init__(self, ui, header, fp):
1388 def __init__(self, ui, header, fp):
1389 # (payload, file) offsets for chunk starts.
1389 # (payload, file) offsets for chunk starts.
1390 self._chunkindex = []
1390 self._chunkindex = []
1391
1391
1392 super(seekableunbundlepart, self).__init__(ui, header, fp)
1392 super(seekableunbundlepart, self).__init__(ui, header, fp)
1393
1393
1394 def _payloadchunks(self, chunknum=0):
1394 def _payloadchunks(self, chunknum=0):
1395 '''seek to specified chunk and start yielding data'''
1395 '''seek to specified chunk and start yielding data'''
1396 if len(self._chunkindex) == 0:
1396 if len(self._chunkindex) == 0:
1397 assert chunknum == 0, 'Must start with chunk 0'
1397 assert chunknum == 0, 'Must start with chunk 0'
1398 self._chunkindex.append((0, self._tellfp()))
1398 self._chunkindex.append((0, self._tellfp()))
1399 else:
1399 else:
1400 assert chunknum < len(self._chunkindex), \
1400 assert chunknum < len(self._chunkindex), \
1401 'Unknown chunk %d' % chunknum
1401 'Unknown chunk %d' % chunknum
1402 self._seekfp(self._chunkindex[chunknum][1])
1402 self._seekfp(self._chunkindex[chunknum][1])
1403
1403
1404 pos = self._chunkindex[chunknum][0]
1404 pos = self._chunkindex[chunknum][0]
1405
1405
1406 for chunk in decodepayloadchunks(self.ui, self._fp):
1406 for chunk in decodepayloadchunks(self.ui, self._fp):
1407 chunknum += 1
1407 chunknum += 1
1408 pos += len(chunk)
1408 pos += len(chunk)
1409 if chunknum == len(self._chunkindex):
1409 if chunknum == len(self._chunkindex):
1410 self._chunkindex.append((pos, self._tellfp()))
1410 self._chunkindex.append((pos, self._tellfp()))
1411
1411
1412 yield chunk
1412 yield chunk
1413
1413
1414 def _findchunk(self, pos):
1414 def _findchunk(self, pos):
1415 '''for a given payload position, return a chunk number and offset'''
1415 '''for a given payload position, return a chunk number and offset'''
1416 for chunk, (ppos, fpos) in enumerate(self._chunkindex):
1416 for chunk, (ppos, fpos) in enumerate(self._chunkindex):
1417 if ppos == pos:
1417 if ppos == pos:
1418 return chunk, 0
1418 return chunk, 0
1419 elif ppos > pos:
1419 elif ppos > pos:
1420 return chunk - 1, pos - self._chunkindex[chunk - 1][0]
1420 return chunk - 1, pos - self._chunkindex[chunk - 1][0]
1421 raise ValueError('Unknown chunk')
1421 raise ValueError('Unknown chunk')
1422
1422
1423 def tell(self):
1423 def tell(self):
1424 return self._pos
1424 return self._pos
1425
1425
1426 def seek(self, offset, whence=os.SEEK_SET):
1426 def seek(self, offset, whence=os.SEEK_SET):
1427 if whence == os.SEEK_SET:
1427 if whence == os.SEEK_SET:
1428 newpos = offset
1428 newpos = offset
1429 elif whence == os.SEEK_CUR:
1429 elif whence == os.SEEK_CUR:
1430 newpos = self._pos + offset
1430 newpos = self._pos + offset
1431 elif whence == os.SEEK_END:
1431 elif whence == os.SEEK_END:
1432 if not self.consumed:
1432 if not self.consumed:
1433 # Can't use self.consume() here because it advances self._pos.
1433 # Can't use self.consume() here because it advances self._pos.
1434 chunk = self.read(32768)
1434 chunk = self.read(32768)
1435 while chunk:
1435 while chunk:
1436 chunk = self.read(32768)
1436 chunk = self.read(32768)
1437 newpos = self._chunkindex[-1][0] - offset
1437 newpos = self._chunkindex[-1][0] - offset
1438 else:
1438 else:
1439 raise ValueError('Unknown whence value: %r' % (whence,))
1439 raise ValueError('Unknown whence value: %r' % (whence,))
1440
1440
1441 if newpos > self._chunkindex[-1][0] and not self.consumed:
1441 if newpos > self._chunkindex[-1][0] and not self.consumed:
1442 # Can't use self.consume() here because it advances self._pos.
1442 # Can't use self.consume() here because it advances self._pos.
1443 chunk = self.read(32768)
1443 chunk = self.read(32768)
1444 while chunk:
1444 while chunk:
1445 chunk = self.read(32768)
1445 chunk = self.read(32768)
1446
1446
1447 if not 0 <= newpos <= self._chunkindex[-1][0]:
1447 if not 0 <= newpos <= self._chunkindex[-1][0]:
1448 raise ValueError('Offset out of range')
1448 raise ValueError('Offset out of range')
1449
1449
1450 if self._pos != newpos:
1450 if self._pos != newpos:
1451 chunk, internaloffset = self._findchunk(newpos)
1451 chunk, internaloffset = self._findchunk(newpos)
1452 self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
1452 self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
1453 adjust = self.read(internaloffset)
1453 adjust = self.read(internaloffset)
1454 if len(adjust) != internaloffset:
1454 if len(adjust) != internaloffset:
1455 raise error.Abort(_('Seek failed\n'))
1455 raise error.Abort(_('Seek failed\n'))
1456 self._pos = newpos
1456 self._pos = newpos
1457
1457
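A hedged sketch of the resulting file-like behaviour: seeking to the end forces the whole payload through read(), which fills in the chunk index, after which backwards seeks inside the decoded payload are cheap. Here ui, headerblock and fp stand in for values normally produced by iterparts(seekable=True).

    import os

    # Illustrative only.
    part = seekableunbundlepart(ui, headerblock, fp)
    part.seek(0, os.SEEK_END)   # consumes and indexes the decoded payload
    totalsize = part.tell()
    part.seek(0)                # rewind using the (payload, file) offset index
    preview = part.read(64)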
1458 def _seekfp(self, offset, whence=0):
1458 def _seekfp(self, offset, whence=0):
1459 """move the underlying file pointer
1459 """move the underlying file pointer
1460
1460
1461 This method is meant for internal usage by the bundle2 protocol only.
1461 This method is meant for internal usage by the bundle2 protocol only.
1462 It directly manipulates the low level stream, including bundle2 level
1462 It directly manipulates the low level stream, including bundle2 level
1463 instructions.
1463 instructions.
1464
1464
1465 Do not use it to implement higher-level logic or methods."""
1465 Do not use it to implement higher-level logic or methods."""
1466 if self._seekable:
1466 if self._seekable:
1467 return self._fp.seek(offset, whence)
1467 return self._fp.seek(offset, whence)
1468 else:
1468 else:
1469 raise NotImplementedError(_('File pointer is not seekable'))
1469 raise NotImplementedError(_('File pointer is not seekable'))
1470
1470
1471 def _tellfp(self):
1471 def _tellfp(self):
1472 """return the file offset, or None if file is not seekable
1472 """return the file offset, or None if file is not seekable
1473
1473
1474 This method is meant for internal usage by the bundle2 protocol only.
1474 This method is meant for internal usage by the bundle2 protocol only.
1475 It directly manipulates the low level stream, including bundle2 level
1475 It directly manipulates the low level stream, including bundle2 level
1476 instructions.
1476 instructions.
1477
1477
1478 Do not use it to implement higher-level logic or methods."""
1478 Do not use it to implement higher-level logic or methods."""
1479 if self._seekable:
1479 if self._seekable:
1480 try:
1480 try:
1481 return self._fp.tell()
1481 return self._fp.tell()
1482 except IOError as e:
1482 except IOError as e:
1483 if e.errno == errno.ESPIPE:
1483 if e.errno == errno.ESPIPE:
1484 self._seekable = False
1484 self._seekable = False
1485 else:
1485 else:
1486 raise
1486 raise
1487 return None
1487 return None
1488
1488
1489 # These are only the static capabilities.
1489 # These are only the static capabilities.
1490 # Check the 'getrepocaps' function for the rest.
1490 # Check the 'getrepocaps' function for the rest.
1491 capabilities = {'HG20': (),
1491 capabilities = {'HG20': (),
1492 'bookmarks': (),
1492 'bookmarks': (),
1493 'error': ('abort', 'unsupportedcontent', 'pushraced',
1493 'error': ('abort', 'unsupportedcontent', 'pushraced',
1494 'pushkey'),
1494 'pushkey'),
1495 'listkeys': (),
1495 'listkeys': (),
1496 'pushkey': (),
1496 'pushkey': (),
1497 'digests': tuple(sorted(util.DIGESTS.keys())),
1497 'digests': tuple(sorted(util.DIGESTS.keys())),
1498 'remote-changegroup': ('http', 'https'),
1498 'remote-changegroup': ('http', 'https'),
1499 'hgtagsfnodes': (),
1499 'hgtagsfnodes': (),
1500 'rev-branch-cache': (),
1500 'rev-branch-cache': (),
1501 'phases': ('heads',),
1501 'phases': ('heads',),
1502 'stream': ('v2',),
1502 'stream': ('v2',),
1503 }
1503 }
1504
1504
1505 def getrepocaps(repo, allowpushback=False, role=None):
1505 def getrepocaps(repo, allowpushback=False, role=None):
1506 """return the bundle2 capabilities for a given repo
1506 """return the bundle2 capabilities for a given repo
1507
1507
1508 Exists to allow extensions (like evolution) to mutate the capabilities.
1508 Exists to allow extensions (like evolution) to mutate the capabilities.
1509
1509
1510 The returned value is used for servers advertising their capabilities as
1510 The returned value is used for servers advertising their capabilities as
1511 well as clients advertising their capabilities to servers as part of
1511 well as clients advertising their capabilities to servers as part of
1512 bundle2 requests. The ``role`` argument specifies which is which.
1512 bundle2 requests. The ``role`` argument specifies which is which.
1513 """
1513 """
1514 if role not in ('client', 'server'):
1514 if role not in ('client', 'server'):
1515 raise error.ProgrammingError('role argument must be client or server')
1515 raise error.ProgrammingError('role argument must be client or server')
1516
1516
1517 caps = capabilities.copy()
1517 caps = capabilities.copy()
1518 caps['changegroup'] = tuple(sorted(
1518 caps['changegroup'] = tuple(sorted(
1519 changegroup.supportedincomingversions(repo)))
1519 changegroup.supportedincomingversions(repo)))
1520 if obsolete.isenabled(repo, obsolete.exchangeopt):
1520 if obsolete.isenabled(repo, obsolete.exchangeopt):
1521 supportedformat = tuple('V%i' % v for v in obsolete.formats)
1521 supportedformat = tuple('V%i' % v for v in obsolete.formats)
1522 caps['obsmarkers'] = supportedformat
1522 caps['obsmarkers'] = supportedformat
1523 if allowpushback:
1523 if allowpushback:
1524 caps['pushback'] = ()
1524 caps['pushback'] = ()
1525 cpmode = repo.ui.config('server', 'concurrent-push-mode')
1525 cpmode = repo.ui.config('server', 'concurrent-push-mode')
1526 if cpmode == 'check-related':
1526 if cpmode == 'check-related':
1527 caps['checkheads'] = ('related',)
1527 caps['checkheads'] = ('related',)
1528 if 'phases' in repo.ui.configlist('devel', 'legacy.exchange'):
1528 if 'phases' in repo.ui.configlist('devel', 'legacy.exchange'):
1529 caps.pop('phases')
1529 caps.pop('phases')
1530
1530
1531 # Don't advertise stream clone support in server mode if not configured.
1531 # Don't advertise stream clone support in server mode if not configured.
1532 if role == 'server':
1532 if role == 'server':
1533 streamsupported = repo.ui.configbool('server', 'uncompressed',
1533 streamsupported = repo.ui.configbool('server', 'uncompressed',
1534 untrusted=True)
1534 untrusted=True)
1535 featuresupported = repo.ui.configbool('experimental', 'bundle2.stream')
1535 featuresupported = repo.ui.configbool('experimental', 'bundle2.stream')
1536
1536
1537 if not streamsupported or not featuresupported:
1537 if not streamsupported or not featuresupported:
1538 caps.pop('stream')
1538 caps.pop('stream')
1539 # Else always advertise support on client, because payload support
1539 # Else always advertise support on client, because payload support
1540 # should always be advertised.
1540 # should always be advertised.
1541
1541
1542 return caps
1542 return caps
1543
1543
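A hedged example of the two call sites this function is written for; repo stands for any local repository object.

    # Server side: capabilities advertised to clients.
    servercaps = getrepocaps(repo, role='server')
    # Client side: capabilities sent along with a getbundle request.
    clientcaps = getrepocaps(repo, allowpushback=True, role='client')
    assert 'HG20' in servercaps and 'changegroup' in clientcaps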
1544 def bundle2caps(remote):
1544 def bundle2caps(remote):
1545 """return the bundle capabilities of a peer as dict"""
1545 """return the bundle capabilities of a peer as dict"""
1546 raw = remote.capable('bundle2')
1546 raw = remote.capable('bundle2')
1547 if not raw and raw != '':
1547 if not raw and raw != '':
1548 return {}
1548 return {}
1549 capsblob = urlreq.unquote(remote.capable('bundle2'))
1549 capsblob = urlreq.unquote(remote.capable('bundle2'))
1550 return decodecaps(capsblob)
1550 return decodecaps(capsblob)
1551
1551
1552 def obsmarkersversion(caps):
1552 def obsmarkersversion(caps):
1553 """extract the list of supported obsmarkers versions from a bundle2caps dict
1553 """extract the list of supported obsmarkers versions from a bundle2caps dict
1554 """
1554 """
1555 obscaps = caps.get('obsmarkers', ())
1555 obscaps = caps.get('obsmarkers', ())
1556 return [int(c[1:]) for c in obscaps if c.startswith('V')]
1556 return [int(c[1:]) for c in obscaps if c.startswith('V')]
1557
1557
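For example, a capabilities dict advertising obsmarker format versions 0 and 1 decodes as follows:

    assert obsmarkersversion({'obsmarkers': ('V0', 'V1')}) == [0, 1]
    assert obsmarkersversion({}) == []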
1558 def writenewbundle(ui, repo, source, filename, bundletype, outgoing, opts,
1558 def writenewbundle(ui, repo, source, filename, bundletype, outgoing, opts,
1559 vfs=None, compression=None, compopts=None):
1559 vfs=None, compression=None, compopts=None):
1560 if bundletype.startswith('HG10'):
1560 if bundletype.startswith('HG10'):
1561 cg = changegroup.makechangegroup(repo, outgoing, '01', source)
1561 cg = changegroup.makechangegroup(repo, outgoing, '01', source)
1562 return writebundle(ui, cg, filename, bundletype, vfs=vfs,
1562 return writebundle(ui, cg, filename, bundletype, vfs=vfs,
1563 compression=compression, compopts=compopts)
1563 compression=compression, compopts=compopts)
1564 elif not bundletype.startswith('HG20'):
1564 elif not bundletype.startswith('HG20'):
1565 raise error.ProgrammingError('unknown bundle type: %s' % bundletype)
1565 raise error.ProgrammingError('unknown bundle type: %s' % bundletype)
1566
1566
1567 caps = {}
1567 caps = {}
1568 if 'obsolescence' in opts:
1568 if 'obsolescence' in opts:
1569 caps['obsmarkers'] = ('V1',)
1569 caps['obsmarkers'] = ('V1',)
1570 bundle = bundle20(ui, caps)
1570 bundle = bundle20(ui, caps)
1571 bundle.setcompression(compression, compopts)
1571 bundle.setcompression(compression, compopts)
1572 _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
1572 _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
1573 chunkiter = bundle.getchunks()
1573 chunkiter = bundle.getchunks()
1574
1574
1575 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1575 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1576
1576
1577 def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
1577 def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
1578 # We should eventually reconcile this logic with the one behind
1578 # We should eventually reconcile this logic with the one behind
1579 # 'exchange.getbundle2partsgenerator'.
1579 # 'exchange.getbundle2partsgenerator'.
1580 #
1580 #
1581 # The types of input from 'getbundle' and 'writenewbundle' are a bit
1581 # The types of input from 'getbundle' and 'writenewbundle' are a bit
1582 # different right now. So we keep them separated for now for the sake of
1582 # different right now. So we keep them separated for now for the sake of
1583 # simplicity.
1583 # simplicity.
1584
1584
1585 # we might not always want a changegroup in such a bundle, for example in
1585 # we might not always want a changegroup in such a bundle, for example in
1586 # stream bundles
1586 # stream bundles
1587 if opts.get('changegroup', True):
1587 if opts.get('changegroup', True):
1588 cgversion = opts.get('cg.version')
1588 cgversion = opts.get('cg.version')
1589 if cgversion is None:
1589 if cgversion is None:
1590 cgversion = changegroup.safeversion(repo)
1590 cgversion = changegroup.safeversion(repo)
1591 cg = changegroup.makechangegroup(repo, outgoing, cgversion, source)
1591 cg = changegroup.makechangegroup(repo, outgoing, cgversion, source)
1592 part = bundler.newpart('changegroup', data=cg.getchunks())
1592 part = bundler.newpart('changegroup', data=cg.getchunks())
1593 part.addparam('version', cg.version)
1593 part.addparam('version', cg.version)
1594 if 'clcount' in cg.extras:
1594 if 'clcount' in cg.extras:
1595 part.addparam('nbchanges', '%d' % cg.extras['clcount'],
1595 part.addparam('nbchanges', '%d' % cg.extras['clcount'],
1596 mandatory=False)
1596 mandatory=False)
1597 if opts.get('phases') and repo.revs('%ln and secret()',
1597 if opts.get('phases') and repo.revs('%ln and secret()',
1598 outgoing.missingheads):
1598 outgoing.missingheads):
1599 part.addparam('targetphase', '%d' % phases.secret, mandatory=False)
1599 part.addparam('targetphase', '%d' % phases.secret, mandatory=False)
1600
1600
1601 if opts.get('streamv2', False):
1601 if opts.get('streamv2', False):
1602 addpartbundlestream2(bundler, repo, stream=True)
1602 addpartbundlestream2(bundler, repo, stream=True)
1603
1603
1604 if opts.get('tagsfnodescache', True):
1604 if opts.get('tagsfnodescache', True):
1605 addparttagsfnodescache(repo, bundler, outgoing)
1605 addparttagsfnodescache(repo, bundler, outgoing)
1606
1606
1607 if opts.get('revbranchcache', True):
1607 if opts.get('revbranchcache', True):
1608 addpartrevbranchcache(repo, bundler, outgoing)
1608 addpartrevbranchcache(repo, bundler, outgoing)
1609
1609
1610 if opts.get('obsolescence', False):
1610 if opts.get('obsolescence', False):
1611 obsmarkers = repo.obsstore.relevantmarkers(outgoing.missing)
1611 obsmarkers = repo.obsstore.relevantmarkers(outgoing.missing)
1612 buildobsmarkerspart(bundler, obsmarkers)
1612 buildobsmarkerspart(bundler, obsmarkers)
1613
1613
1614 if opts.get('phases', False):
1614 if opts.get('phases', False):
1615 headsbyphase = phases.subsetphaseheads(repo, outgoing.missing)
1615 headsbyphase = phases.subsetphaseheads(repo, outgoing.missing)
1616 phasedata = phases.binaryencode(headsbyphase)
1616 phasedata = phases.binaryencode(headsbyphase)
1617 bundler.newpart('phase-heads', data=phasedata)
1617 bundler.newpart('phase-heads', data=phasedata)
1618
1618
1619 def addparttagsfnodescache(repo, bundler, outgoing):
1619 def addparttagsfnodescache(repo, bundler, outgoing):
1620 # we include the tags fnode cache for the bundle changeset
1620 # we include the tags fnode cache for the bundle changeset
1621 # (as an optional part)
1621 # (as an optional part)
1622 cache = tags.hgtagsfnodescache(repo.unfiltered())
1622 cache = tags.hgtagsfnodescache(repo.unfiltered())
1623 chunks = []
1623 chunks = []
1624
1624
1625 # .hgtags fnodes are only relevant for head changesets. While we could
1625 # .hgtags fnodes are only relevant for head changesets. While we could
1626 # transfer values for all known nodes, there will likely be little to
1626 # transfer values for all known nodes, there will likely be little to
1627 # no benefit.
1627 # no benefit.
1628 #
1628 #
1629 # We don't bother using a generator to produce output data because
1629 # We don't bother using a generator to produce output data because
1630 # a) we only have 40 bytes per head and even esoteric numbers of heads
1630 # a) we only have 40 bytes per head and even esoteric numbers of heads
1631 # consume little memory (1M heads is 40MB) b) we don't want to send the
1631 # consume little memory (1M heads is 40MB) b) we don't want to send the
1632 # part if we don't have entries and knowing if we have entries requires
1632 # part if we don't have entries and knowing if we have entries requires
1633 # cache lookups.
1633 # cache lookups.
1634 for node in outgoing.missingheads:
1634 for node in outgoing.missingheads:
1635 # Don't compute missing, as this may slow down serving.
1635 # Don't compute missing, as this may slow down serving.
1636 fnode = cache.getfnode(node, computemissing=False)
1636 fnode = cache.getfnode(node, computemissing=False)
1637 if fnode is not None:
1637 if fnode is not None:
1638 chunks.extend([node, fnode])
1638 chunks.extend([node, fnode])
1639
1639
1640 if chunks:
1640 if chunks:
1641 bundler.newpart('hgtagsfnodes', data=''.join(chunks))
1641 bundler.newpart('hgtagsfnodes', data=''.join(chunks))
1642
1642
1643 def addpartrevbranchcache(repo, bundler, outgoing):
1643 def addpartrevbranchcache(repo, bundler, outgoing):
1644 # we include the rev branch cache for the bundle changeset
1644 # we include the rev branch cache for the bundle changeset
1645 # (as an optional part)
1645 # (as an optional part)
1646 cache = repo.revbranchcache()
1646 cache = repo.revbranchcache()
1647 cl = repo.unfiltered().changelog
1647 cl = repo.unfiltered().changelog
1648 branchesdata = collections.defaultdict(lambda: (set(), set()))
1648 branchesdata = collections.defaultdict(lambda: (set(), set()))
1649 for node in outgoing.missing:
1649 for node in outgoing.missing:
1650 branch, close = cache.branchinfo(cl.rev(node))
1650 branch, close = cache.branchinfo(cl.rev(node))
1651 branchesdata[branch][close].add(node)
1651 branchesdata[branch][close].add(node)
1652
1652
1653 def generate():
1653 def generate():
1654 for branch, (nodes, closed) in sorted(branchesdata.items()):
1654 for branch, (nodes, closed) in sorted(branchesdata.items()):
1655 utf8branch = encoding.fromlocal(branch)
1655 utf8branch = encoding.fromlocal(branch)
1656 yield rbcstruct.pack(len(utf8branch), len(nodes), len(closed))
1656 yield rbcstruct.pack(len(utf8branch), len(nodes), len(closed))
1657 yield utf8branch
1657 yield utf8branch
1658 for n in sorted(nodes):
1658 for n in sorted(nodes):
1659 yield n
1659 yield n
1660 for n in sorted(closed):
1660 for n in sorted(closed):
1661 yield n
1661 yield n
1662
1662
1663 bundler.newpart('cache:rev-branch-cache', data=generate(),
1663 bundler.newpart('cache:rev-branch-cache', data=generate(),
1664 mandatory=False)
1664 mandatory=False)
1665
1665
1666 def _formatrequirementsspec(requirements):
1666 def _formatrequirementsspec(requirements):
1667 return urlreq.quote(','.join(sorted(requirements)))
1667 return urlreq.quote(','.join(sorted(requirements)))
1668
1668
1669 def _formatrequirementsparams(requirements):
1669 def _formatrequirementsparams(requirements):
1670 requirements = _formatrequirementsspec(requirements)
1670 requirements = _formatrequirementsspec(requirements)
1671 params = "%s%s" % (urlreq.quote("requirements="), requirements)
1671 params = "%s%s" % (urlreq.quote("requirements="), requirements)
1672 return params
1672 return params
1673
1673
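Both helpers return URL-quoted text; a hedged illustration follows (the exact requirement names depend on the repository, and the percent-encoding of ',' and '=' is an assumption about urlreq.quote's default safe characters).

    spec = _formatrequirementsspec({'revlogv1', 'store'})
    params = _formatrequirementsparams({'revlogv1', 'store'})
    # e.g. spec   == 'revlogv1%2Cstore'
    # and  params == 'requirements%3Drevlogv1%2Cstore'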
1674 def addpartbundlestream2(bundler, repo, **kwargs):
1674 def addpartbundlestream2(bundler, repo, **kwargs):
1675 if not kwargs.get('stream', False):
1675 if not kwargs.get('stream', False):
1676 return
1676 return
1677
1677
1678 if not streamclone.allowservergeneration(repo):
1678 if not streamclone.allowservergeneration(repo):
1679 raise error.Abort(_('stream data requested but server does not allow '
1679 raise error.Abort(_('stream data requested but server does not allow '
1680 'this feature'),
1680 'this feature'),
1681 hint=_('well-behaved clients should not be '
1681 hint=_('well-behaved clients should not be '
1682 'requesting stream data from servers not '
1682 'requesting stream data from servers not '
1683 'advertising it; the client may be buggy'))
1683 'advertising it; the client may be buggy'))
1684
1684
1685 # Stream clones don't compress well. And compression undermines a
1685 # Stream clones don't compress well. And compression undermines a
1686 # goal of stream clones, which is to be fast. Communicate the desire
1686 # goal of stream clones, which is to be fast. Communicate the desire
1687 # to avoid compression to consumers of the bundle.
1687 # to avoid compression to consumers of the bundle.
1688 bundler.prefercompressed = False
1688 bundler.prefercompressed = False
1689
1689
1690 filecount, bytecount, it = streamclone.generatev2(repo)
1690 filecount, bytecount, it = streamclone.generatev2(repo)
1691 requirements = _formatrequirementsspec(repo.requirements)
1691 requirements = _formatrequirementsspec(repo.requirements)
1692 part = bundler.newpart('stream2', data=it)
1692 part = bundler.newpart('stream2', data=it)
1693 part.addparam('bytecount', '%d' % bytecount, mandatory=True)
1693 part.addparam('bytecount', '%d' % bytecount, mandatory=True)
1694 part.addparam('filecount', '%d' % filecount, mandatory=True)
1694 part.addparam('filecount', '%d' % filecount, mandatory=True)
1695 part.addparam('requirements', requirements, mandatory=True)
1695 part.addparam('requirements', requirements, mandatory=True)
1696
1696
1697 def buildobsmarkerspart(bundler, markers):
1697 def buildobsmarkerspart(bundler, markers):
1698 """add an obsmarker part to the bundler with <markers>
1698 """add an obsmarker part to the bundler with <markers>
1699
1699
1700 No part is created if markers is empty.
1700 No part is created if markers is empty.
1701 Raises ValueError if the bundler doesn't support any known obsmarker format.
1701 Raises ValueError if the bundler doesn't support any known obsmarker format.
1702 """
1702 """
1703 if not markers:
1703 if not markers:
1704 return None
1704 return None
1705
1705
1706 remoteversions = obsmarkersversion(bundler.capabilities)
1706 remoteversions = obsmarkersversion(bundler.capabilities)
1707 version = obsolete.commonversion(remoteversions)
1707 version = obsolete.commonversion(remoteversions)
1708 if version is None:
1708 if version is None:
1709 raise ValueError('bundler does not support common obsmarker format')
1709 raise ValueError('bundler does not support common obsmarker format')
1710 stream = obsolete.encodemarkers(markers, True, version=version)
1710 stream = obsolete.encodemarkers(markers, True, version=version)
1711 return bundler.newpart('obsmarkers', data=stream)
1711 return bundler.newpart('obsmarkers', data=stream)
1712
1712
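A hedged usage sketch for buildobsmarkerspart(): given a bundle20 instance and an iterable of obsolescence markers obtained elsewhere by the caller, the helper either returns the new part or None when there is nothing to send.

    # `bundler` is a bundle20 instance and `markers` an iterable of
    # obsolescence markers; both are assumed to exist in the caller.
    part = buildobsmarkerspart(bundler, markers)
    if part is None:
        pass  # empty marker list: no 'obsmarkers' part was added
    # A ValueError is raised instead if the receiver advertises no obsmarker
    # format in common with us.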
1713 def writebundle(ui, cg, filename, bundletype, vfs=None, compression=None,
1713 def writebundle(ui, cg, filename, bundletype, vfs=None, compression=None,
1714 compopts=None):
1714 compopts=None):
1715 """Write a bundle file and return its filename.
1715 """Write a bundle file and return its filename.
1716
1716
1717 Existing files will not be overwritten.
1717 Existing files will not be overwritten.
1718 If no filename is specified, a temporary file is created.
1718 If no filename is specified, a temporary file is created.
1719 bz2 compression can be turned off.
1719 bz2 compression can be turned off.
1720 The bundle file will be deleted in case of errors.
1720 The bundle file will be deleted in case of errors.
1721 """
1721 """
1722
1722
1723 if bundletype == "HG20":
1723 if bundletype == "HG20":
1724 bundle = bundle20(ui)
1724 bundle = bundle20(ui)
1725 bundle.setcompression(compression, compopts)
1725 bundle.setcompression(compression, compopts)
1726 part = bundle.newpart('changegroup', data=cg.getchunks())
1726 part = bundle.newpart('changegroup', data=cg.getchunks())
1727 part.addparam('version', cg.version)
1727 part.addparam('version', cg.version)
1728 if 'clcount' in cg.extras:
1728 if 'clcount' in cg.extras:
1729 part.addparam('nbchanges', '%d' % cg.extras['clcount'],
1729 part.addparam('nbchanges', '%d' % cg.extras['clcount'],
1730 mandatory=False)
1730 mandatory=False)
1731 chunkiter = bundle.getchunks()
1731 chunkiter = bundle.getchunks()
1732 else:
1732 else:
1733 # compression argument is only for the bundle2 case
1733 # compression argument is only for the bundle2 case
1734 assert compression is None
1734 assert compression is None
1735 if cg.version != '01':
1735 if cg.version != '01':
1736 raise error.Abort(_('old bundle types only support v1 '
1736 raise error.Abort(_('old bundle types only support v1 '
1737 'changegroups'))
1737 'changegroups'))
1738 header, comp = bundletypes[bundletype]
1738 header, comp = bundletypes[bundletype]
1739 if comp not in util.compengines.supportedbundletypes:
1739 if comp not in util.compengines.supportedbundletypes:
1740 raise error.Abort(_('unknown stream compression type: %s')
1740 raise error.Abort(_('unknown stream compression type: %s')
1741 % comp)
1741 % comp)
1742 compengine = util.compengines.forbundletype(comp)
1742 compengine = util.compengines.forbundletype(comp)
1743 def chunkiter():
1743 def chunkiter():
1744 yield header
1744 yield header
1745 for chunk in compengine.compressstream(cg.getchunks(), compopts):
1745 for chunk in compengine.compressstream(cg.getchunks(), compopts):
1746 yield chunk
1746 yield chunk
1747 chunkiter = chunkiter()
1747 chunkiter = chunkiter()
1748
1748
1749 # parse the changegroup data, otherwise we will block
1749 # parse the changegroup data, otherwise we will block
1750 # in case of sshrepo because we don't know the end of the stream
1750 # in case of sshrepo because we don't know the end of the stream
1751 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1751 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1752
1752
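A hedged usage sketch (not part of this changeset) of the two code paths above: writing a changegroup object `cg` out either as a bundle2 container or as a legacy v1 bundle. The filenames are illustrative.

    # bundle2 container; compression, if any, goes through setcompression().
    fname = writebundle(ui, cg, 'out.hg', 'HG20')

    # legacy path: cg.version must be '01' and compression must stay None,
    # the compression engine being implied by the bundletype itself.
    fname = writebundle(ui, cg, 'out-v1.hg', 'HG10UN')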
1753 def combinechangegroupresults(op):
1753 def combinechangegroupresults(op):
1754 """logic to combine 0 or more addchangegroup results into one"""
1754 """logic to combine 0 or more addchangegroup results into one"""
1755 results = [r.get('return', 0)
1755 results = [r.get('return', 0)
1756 for r in op.records['changegroup']]
1756 for r in op.records['changegroup']]
1757 changedheads = 0
1757 changedheads = 0
1758 result = 1
1758 result = 1
1759 for ret in results:
1759 for ret in results:
1760 # If any changegroup result is 0, return 0
1760 # If any changegroup result is 0, return 0
1761 if ret == 0:
1761 if ret == 0:
1762 result = 0
1762 result = 0
1763 break
1763 break
1764 if ret < -1:
1764 if ret < -1:
1765 changedheads += ret + 1
1765 changedheads += ret + 1
1766 elif ret > 1:
1766 elif ret > 1:
1767 changedheads += ret - 1
1767 changedheads += ret - 1
1768 if changedheads > 0:
1768 if changedheads > 0:
1769 result = 1 + changedheads
1769 result = 1 + changedheads
1770 elif changedheads < 0:
1770 elif changedheads < 0:
1771 result = -1 + changedheads
1771 result = -1 + changedheads
1772 return result
1772 return result
1773
1773
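The return-code convention used above is easy to misread, so here is a small self-contained illustration of how individual addchangegroup results combine: 0 means failure, 1 means success without changing the head count, 1+n means n heads were added, and -1-n means n heads were removed.

    def combine(results):
        # mirrors the arithmetic in combinechangegroupresults() above
        changedheads = 0
        for ret in results:
            if ret == 0:
                return 0
            if ret < -1:
                changedheads += ret + 1
            elif ret > 1:
                changedheads += ret - 1
        if changedheads > 0:
            return 1 + changedheads
        if changedheads < 0:
            return -1 + changedheads
        return 1

    assert combine([1, 1]) == 1     # no change to the head count
    assert combine([3, 2]) == 4     # 2 + 1 new heads -> 1 + 3
    assert combine([-2, 1]) == -2   # one head removed -> -1 - 1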
1774 @parthandler('changegroup', ('version', 'nbchanges', 'treemanifest',
1774 @parthandler('changegroup', ('version', 'nbchanges', 'treemanifest',
1775 'targetphase'))
1775 'targetphase'))
1776 def handlechangegroup(op, inpart):
1776 def handlechangegroup(op, inpart):
1777 """apply a changegroup part on the repo
1777 """apply a changegroup part on the repo
1778
1778
1779 This is a very early implementation that will be massively reworked before
1779 This is a very early implementation that will be massively reworked before
1780 being inflicted on any end-user.
1780 being inflicted on any end-user.
1781 """
1781 """
1782 from . import localrepo
1783
1782 tr = op.gettransaction()
1784 tr = op.gettransaction()
1783 unpackerversion = inpart.params.get('version', '01')
1785 unpackerversion = inpart.params.get('version', '01')
1784 # We should raise an appropriate exception here
1786 # We should raise an appropriate exception here
1785 cg = changegroup.getunbundler(unpackerversion, inpart, None)
1787 cg = changegroup.getunbundler(unpackerversion, inpart, None)
1786 # the source and url passed here are overwritten by the one contained in
1788 # the source and url passed here are overwritten by the one contained in
1787 # the transaction.hookargs argument. So 'bundle2' is a placeholder
1789 # the transaction.hookargs argument. So 'bundle2' is a placeholder
1788 nbchangesets = None
1790 nbchangesets = None
1789 if 'nbchanges' in inpart.params:
1791 if 'nbchanges' in inpart.params:
1790 nbchangesets = int(inpart.params.get('nbchanges'))
1792 nbchangesets = int(inpart.params.get('nbchanges'))
1791 if ('treemanifest' in inpart.params and
1793 if ('treemanifest' in inpart.params and
1792 'treemanifest' not in op.repo.requirements):
1794 'treemanifest' not in op.repo.requirements):
1793 if len(op.repo.changelog) != 0:
1795 if len(op.repo.changelog) != 0:
1794 raise error.Abort(_(
1796 raise error.Abort(_(
1795 "bundle contains tree manifests, but local repo is "
1797 "bundle contains tree manifests, but local repo is "
1796 "non-empty and does not use tree manifests"))
1798 "non-empty and does not use tree manifests"))
1797 op.repo.requirements.add('treemanifest')
1799 op.repo.requirements.add('treemanifest')
1798 op.repo._applyopenerreqs()
1800 op.repo.svfs.options = localrepo.resolvestorevfsoptions(
1801 op.repo.ui, op.repo.requirements)
1799 op.repo._writerequirements()
1802 op.repo._writerequirements()
1800 extrakwargs = {}
1803 extrakwargs = {}
1801 targetphase = inpart.params.get('targetphase')
1804 targetphase = inpart.params.get('targetphase')
1802 if targetphase is not None:
1805 if targetphase is not None:
1803 extrakwargs[r'targetphase'] = int(targetphase)
1806 extrakwargs[r'targetphase'] = int(targetphase)
1804 ret = _processchangegroup(op, cg, tr, 'bundle2', 'bundle2',
1807 ret = _processchangegroup(op, cg, tr, 'bundle2', 'bundle2',
1805 expectedtotal=nbchangesets, **extrakwargs)
1808 expectedtotal=nbchangesets, **extrakwargs)
1806 if op.reply is not None:
1809 if op.reply is not None:
1807 # This is definitely not the final form of this
1810 # This is definitely not the final form of this
1808 # return. But one needs to start somewhere.
1811 # return. But one needs to start somewhere.
1809 part = op.reply.newpart('reply:changegroup', mandatory=False)
1812 part = op.reply.newpart('reply:changegroup', mandatory=False)
1810 part.addparam(
1813 part.addparam(
1811 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
1814 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
1812 part.addparam('return', '%i' % ret, mandatory=False)
1815 part.addparam('return', '%i' % ret, mandatory=False)
1813 assert not inpart.read()
1816 assert not inpart.read()
1814
1817
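The replacement of repo._applyopenerreqs() with localrepo.resolvestorevfsoptions() above reflects that the store vfs options are derived from the requirements set: once 'treemanifest' is added, the options must be re-resolved from (ui, requirements) before writing the requirements file. A minimal sketch of the same pattern, assuming a repo object exposing the attributes used in this hunk:

    from mercurial import localrepo

    # After mutating the requirements set, re-resolve the store opener
    # options and persist the requirements file.
    repo.requirements.add('treemanifest')
    repo.svfs.options = localrepo.resolvestorevfsoptions(repo.ui,
                                                         repo.requirements)
    repo._writerequirements()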
1815 _remotechangegroupparams = tuple(['url', 'size', 'digests'] +
1818 _remotechangegroupparams = tuple(['url', 'size', 'digests'] +
1816 ['digest:%s' % k for k in util.DIGESTS.keys()])
1819 ['digest:%s' % k for k in util.DIGESTS.keys()])
1817 @parthandler('remote-changegroup', _remotechangegroupparams)
1820 @parthandler('remote-changegroup', _remotechangegroupparams)
1818 def handleremotechangegroup(op, inpart):
1821 def handleremotechangegroup(op, inpart):
1819 """apply a bundle10 on the repo, given a url and validation information
1822 """apply a bundle10 on the repo, given a url and validation information
1820
1823
1821 All the information about the remote bundle to import is given as
1824 All the information about the remote bundle to import is given as
1822 parameters. The parameters include:
1825 parameters. The parameters include:
1823 - url: the url to the bundle10.
1826 - url: the url to the bundle10.
1824 - size: the bundle10 file size. It is used to validate what was
1827 - size: the bundle10 file size. It is used to validate what was
1825 retrieved by the client matches the server knowledge about the bundle.
1828 retrieved by the client matches the server knowledge about the bundle.
1826 - digests: a space separated list of the digest types provided as
1829 - digests: a space separated list of the digest types provided as
1827 parameters.
1830 parameters.
1828 - digest:<digest-type>: the hexadecimal representation of the digest with
1831 - digest:<digest-type>: the hexadecimal representation of the digest with
1829 that name. Like the size, it is used to validate what was retrieved by
1832 that name. Like the size, it is used to validate what was retrieved by
1830 the client matches what the server knows about the bundle.
1833 the client matches what the server knows about the bundle.
1831
1834
1832 When multiple digest types are given, all of them are checked.
1835 When multiple digest types are given, all of them are checked.
1833 """
1836 """
1834 try:
1837 try:
1835 raw_url = inpart.params['url']
1838 raw_url = inpart.params['url']
1836 except KeyError:
1839 except KeyError:
1837 raise error.Abort(_('remote-changegroup: missing "%s" param') % 'url')
1840 raise error.Abort(_('remote-changegroup: missing "%s" param') % 'url')
1838 parsed_url = util.url(raw_url)
1841 parsed_url = util.url(raw_url)
1839 if parsed_url.scheme not in capabilities['remote-changegroup']:
1842 if parsed_url.scheme not in capabilities['remote-changegroup']:
1840 raise error.Abort(_('remote-changegroup does not support %s urls') %
1843 raise error.Abort(_('remote-changegroup does not support %s urls') %
1841 parsed_url.scheme)
1844 parsed_url.scheme)
1842
1845
1843 try:
1846 try:
1844 size = int(inpart.params['size'])
1847 size = int(inpart.params['size'])
1845 except ValueError:
1848 except ValueError:
1846 raise error.Abort(_('remote-changegroup: invalid value for param "%s"')
1849 raise error.Abort(_('remote-changegroup: invalid value for param "%s"')
1847 % 'size')
1850 % 'size')
1848 except KeyError:
1851 except KeyError:
1849 raise error.Abort(_('remote-changegroup: missing "%s" param') % 'size')
1852 raise error.Abort(_('remote-changegroup: missing "%s" param') % 'size')
1850
1853
1851 digests = {}
1854 digests = {}
1852 for typ in inpart.params.get('digests', '').split():
1855 for typ in inpart.params.get('digests', '').split():
1853 param = 'digest:%s' % typ
1856 param = 'digest:%s' % typ
1854 try:
1857 try:
1855 value = inpart.params[param]
1858 value = inpart.params[param]
1856 except KeyError:
1859 except KeyError:
1857 raise error.Abort(_('remote-changegroup: missing "%s" param') %
1860 raise error.Abort(_('remote-changegroup: missing "%s" param') %
1858 param)
1861 param)
1859 digests[typ] = value
1862 digests[typ] = value
1860
1863
1861 real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)
1864 real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)
1862
1865
1863 tr = op.gettransaction()
1866 tr = op.gettransaction()
1864 from . import exchange
1867 from . import exchange
1865 cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
1868 cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
1866 if not isinstance(cg, changegroup.cg1unpacker):
1869 if not isinstance(cg, changegroup.cg1unpacker):
1867 raise error.Abort(_('%s: not a bundle version 1.0') %
1870 raise error.Abort(_('%s: not a bundle version 1.0') %
1868 util.hidepassword(raw_url))
1871 util.hidepassword(raw_url))
1869 ret = _processchangegroup(op, cg, tr, 'bundle2', 'bundle2')
1872 ret = _processchangegroup(op, cg, tr, 'bundle2', 'bundle2')
1870 if op.reply is not None:
1873 if op.reply is not None:
1871 # This is definitely not the final form of this
1874 # This is definitely not the final form of this
1872 # return. But one needs to start somewhere.
1875 # return. But one needs to start somewhere.
1873 part = op.reply.newpart('reply:changegroup')
1876 part = op.reply.newpart('reply:changegroup')
1874 part.addparam(
1877 part.addparam(
1875 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
1878 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
1876 part.addparam('return', '%i' % ret, mandatory=False)
1879 part.addparam('return', '%i' % ret, mandatory=False)
1877 try:
1880 try:
1878 real_part.validate()
1881 real_part.validate()
1879 except error.Abort as e:
1882 except error.Abort as e:
1880 raise error.Abort(_('bundle at %s is corrupted:\n%s') %
1883 raise error.Abort(_('bundle at %s is corrupted:\n%s') %
1881 (util.hidepassword(raw_url), bytes(e)))
1884 (util.hidepassword(raw_url), bytes(e)))
1882 assert not inpart.read()
1885 assert not inpart.read()
1883
1886
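A hedged sketch of the generating side of a 'remote-changegroup' part; the URL, size and digest values are fabricated, and the layout simply mirrors the parameters validated by the handler above.

    part = bundler.newpart('remote-changegroup')
    part.addparam('url', 'https://example.com/bundles/prebuilt.hg')
    part.addparam('size', '%d' % 1048576)
    part.addparam('digests', 'sha1')
    part.addparam('digest:sha1', '2aae6c35c94fcfb415dbe95f408b9ce91ee846ed')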
1884 @parthandler('reply:changegroup', ('return', 'in-reply-to'))
1887 @parthandler('reply:changegroup', ('return', 'in-reply-to'))
1885 def handlereplychangegroup(op, inpart):
1888 def handlereplychangegroup(op, inpart):
1886 ret = int(inpart.params['return'])
1889 ret = int(inpart.params['return'])
1887 replyto = int(inpart.params['in-reply-to'])
1890 replyto = int(inpart.params['in-reply-to'])
1888 op.records.add('changegroup', {'return': ret}, replyto)
1891 op.records.add('changegroup', {'return': ret}, replyto)
1889
1892
1890 @parthandler('check:bookmarks')
1893 @parthandler('check:bookmarks')
1891 def handlecheckbookmarks(op, inpart):
1894 def handlecheckbookmarks(op, inpart):
1892 """check location of bookmarks
1895 """check location of bookmarks
1893
1896
1894 This part is used to detect push races regarding bookmarks: it
1897 This part is used to detect push races regarding bookmarks: it
1895 contains binary encoded (bookmark, node) tuples. If the local state does
1898 contains binary encoded (bookmark, node) tuples. If the local state does
1896 not match the one in the part, a PushRaced exception is raised
1899 not match the one in the part, a PushRaced exception is raised
1897 """
1900 """
1898 bookdata = bookmarks.binarydecode(inpart)
1901 bookdata = bookmarks.binarydecode(inpart)
1899
1902
1900 msgstandard = ('repository changed while pushing - please try again '
1903 msgstandard = ('repository changed while pushing - please try again '
1901 '(bookmark "%s" move from %s to %s)')
1904 '(bookmark "%s" move from %s to %s)')
1902 msgmissing = ('repository changed while pushing - please try again '
1905 msgmissing = ('repository changed while pushing - please try again '
1903 '(bookmark "%s" is missing, expected %s)')
1906 '(bookmark "%s" is missing, expected %s)')
1904 msgexist = ('repository changed while pushing - please try again '
1907 msgexist = ('repository changed while pushing - please try again '
1905 '(bookmark "%s" set on %s, expected missing)')
1908 '(bookmark "%s" set on %s, expected missing)')
1906 for book, node in bookdata:
1909 for book, node in bookdata:
1907 currentnode = op.repo._bookmarks.get(book)
1910 currentnode = op.repo._bookmarks.get(book)
1908 if currentnode != node:
1911 if currentnode != node:
1909 if node is None:
1912 if node is None:
1910 finalmsg = msgexist % (book, nodemod.short(currentnode))
1913 finalmsg = msgexist % (book, nodemod.short(currentnode))
1911 elif currentnode is None:
1914 elif currentnode is None:
1912 finalmsg = msgmissing % (book, nodemod.short(node))
1915 finalmsg = msgmissing % (book, nodemod.short(node))
1913 else:
1916 else:
1914 finalmsg = msgstandard % (book, nodemod.short(node),
1917 finalmsg = msgstandard % (book, nodemod.short(node),
1915 nodemod.short(currentnode))
1918 nodemod.short(currentnode))
1916 raise error.PushRaced(finalmsg)
1919 raise error.PushRaced(finalmsg)
1917
1920
1918 @parthandler('check:heads')
1921 @parthandler('check:heads')
1919 def handlecheckheads(op, inpart):
1922 def handlecheckheads(op, inpart):
1920 """check that the heads of the repo did not change
1923 """check that the heads of the repo did not change
1921
1924
1922 This is used to detect a push race when using unbundle.
1925 This is used to detect a push race when using unbundle.
1923 This replaces the "heads" argument of unbundle."""
1926 This replaces the "heads" argument of unbundle."""
1924 h = inpart.read(20)
1927 h = inpart.read(20)
1925 heads = []
1928 heads = []
1926 while len(h) == 20:
1929 while len(h) == 20:
1927 heads.append(h)
1930 heads.append(h)
1928 h = inpart.read(20)
1931 h = inpart.read(20)
1929 assert not h
1932 assert not h
1930 # Trigger a transaction so that we are guaranteed to have the lock now.
1933 # Trigger a transaction so that we are guaranteed to have the lock now.
1931 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1934 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1932 op.gettransaction()
1935 op.gettransaction()
1933 if sorted(heads) != sorted(op.repo.heads()):
1936 if sorted(heads) != sorted(op.repo.heads()):
1934 raise error.PushRaced('repository changed while pushing - '
1937 raise error.PushRaced('repository changed while pushing - '
1935 'please try again')
1938 'please try again')
1936
1939
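Standalone illustration of the 'check:heads' payload: it is nothing more than the concatenation of 20-byte binary head nodes, which the handler above reads back in fixed-size chunks (the node values here are fake).

    heads = [b'\x11' * 20, b'\x22' * 20]   # fake binary changeset ids
    payload = b''.join(heads)
    assert len(payload) % 20 == 0          # the reader stops on a short read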
1937 @parthandler('check:updated-heads')
1940 @parthandler('check:updated-heads')
1938 def handlecheckupdatedheads(op, inpart):
1941 def handlecheckupdatedheads(op, inpart):
1939 """check for race on the heads touched by a push
1942 """check for race on the heads touched by a push
1940
1943
1941 This is similar to 'check:heads' but focuses on the heads actually updated
1944 This is similar to 'check:heads' but focuses on the heads actually updated
1942 during the push. If other activity happens on unrelated heads, it is
1945 during the push. If other activity happens on unrelated heads, it is
1943 ignored.
1946 ignored.
1944
1947
1945 This allows servers with high traffic to avoid push contention as long as
1948 This allows servers with high traffic to avoid push contention as long as
1946 only unrelated parts of the graph are involved."""
1949 only unrelated parts of the graph are involved."""
1947 h = inpart.read(20)
1950 h = inpart.read(20)
1948 heads = []
1951 heads = []
1949 while len(h) == 20:
1952 while len(h) == 20:
1950 heads.append(h)
1953 heads.append(h)
1951 h = inpart.read(20)
1954 h = inpart.read(20)
1952 assert not h
1955 assert not h
1953 # trigger a transaction so that we are guaranteed to have the lock now.
1956 # trigger a transaction so that we are guaranteed to have the lock now.
1954 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1957 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1955 op.gettransaction()
1958 op.gettransaction()
1956
1959
1957 currentheads = set()
1960 currentheads = set()
1958 for ls in op.repo.branchmap().itervalues():
1961 for ls in op.repo.branchmap().itervalues():
1959 currentheads.update(ls)
1962 currentheads.update(ls)
1960
1963
1961 for h in heads:
1964 for h in heads:
1962 if h not in currentheads:
1965 if h not in currentheads:
1963 raise error.PushRaced('repository changed while pushing - '
1966 raise error.PushRaced('repository changed while pushing - '
1964 'please try again')
1967 'please try again')
1965
1968
1966 @parthandler('check:phases')
1969 @parthandler('check:phases')
1967 def handlecheckphases(op, inpart):
1970 def handlecheckphases(op, inpart):
1968 """check that phase boundaries of the repository did not change
1971 """check that phase boundaries of the repository did not change
1969
1972
1970 This is used to detect a push race.
1973 This is used to detect a push race.
1971 """
1974 """
1972 phasetonodes = phases.binarydecode(inpart)
1975 phasetonodes = phases.binarydecode(inpart)
1973 unfi = op.repo.unfiltered()
1976 unfi = op.repo.unfiltered()
1974 cl = unfi.changelog
1977 cl = unfi.changelog
1975 phasecache = unfi._phasecache
1978 phasecache = unfi._phasecache
1976 msg = ('repository changed while pushing - please try again '
1979 msg = ('repository changed while pushing - please try again '
1977 '(%s is %s expected %s)')
1980 '(%s is %s expected %s)')
1978 for expectedphase, nodes in enumerate(phasetonodes):
1981 for expectedphase, nodes in enumerate(phasetonodes):
1979 for n in nodes:
1982 for n in nodes:
1980 actualphase = phasecache.phase(unfi, cl.rev(n))
1983 actualphase = phasecache.phase(unfi, cl.rev(n))
1981 if actualphase != expectedphase:
1984 if actualphase != expectedphase:
1982 finalmsg = msg % (nodemod.short(n),
1985 finalmsg = msg % (nodemod.short(n),
1983 phases.phasenames[actualphase],
1986 phases.phasenames[actualphase],
1984 phases.phasenames[expectedphase])
1987 phases.phasenames[expectedphase])
1985 raise error.PushRaced(finalmsg)
1988 raise error.PushRaced(finalmsg)
1986
1989
1987 @parthandler('output')
1990 @parthandler('output')
1988 def handleoutput(op, inpart):
1991 def handleoutput(op, inpart):
1989 """forward output captured on the server to the client"""
1992 """forward output captured on the server to the client"""
1990 for line in inpart.read().splitlines():
1993 for line in inpart.read().splitlines():
1991 op.ui.status(_('remote: %s\n') % line)
1994 op.ui.status(_('remote: %s\n') % line)
1992
1995
1993 @parthandler('replycaps')
1996 @parthandler('replycaps')
1994 def handlereplycaps(op, inpart):
1997 def handlereplycaps(op, inpart):
1995 """Notify that a reply bundle should be created
1998 """Notify that a reply bundle should be created
1996
1999
1997 The payload contains the capabilities information for the reply"""
2000 The payload contains the capabilities information for the reply"""
1998 caps = decodecaps(inpart.read())
2001 caps = decodecaps(inpart.read())
1999 if op.reply is None:
2002 if op.reply is None:
2000 op.reply = bundle20(op.ui, caps)
2003 op.reply = bundle20(op.ui, caps)
2001
2004
2002 class AbortFromPart(error.Abort):
2005 class AbortFromPart(error.Abort):
2003 """Sub-class of Abort that denotes an error from a bundle2 part."""
2006 """Sub-class of Abort that denotes an error from a bundle2 part."""
2004
2007
2005 @parthandler('error:abort', ('message', 'hint'))
2008 @parthandler('error:abort', ('message', 'hint'))
2006 def handleerrorabort(op, inpart):
2009 def handleerrorabort(op, inpart):
2007 """Used to transmit abort error over the wire"""
2010 """Used to transmit abort error over the wire"""
2008 raise AbortFromPart(inpart.params['message'],
2011 raise AbortFromPart(inpart.params['message'],
2009 hint=inpart.params.get('hint'))
2012 hint=inpart.params.get('hint'))
2010
2013
2011 @parthandler('error:pushkey', ('namespace', 'key', 'new', 'old', 'ret',
2014 @parthandler('error:pushkey', ('namespace', 'key', 'new', 'old', 'ret',
2012 'in-reply-to'))
2015 'in-reply-to'))
2013 def handleerrorpushkey(op, inpart):
2016 def handleerrorpushkey(op, inpart):
2014 """Used to transmit failure of a mandatory pushkey over the wire"""
2017 """Used to transmit failure of a mandatory pushkey over the wire"""
2015 kwargs = {}
2018 kwargs = {}
2016 for name in ('namespace', 'key', 'new', 'old', 'ret'):
2019 for name in ('namespace', 'key', 'new', 'old', 'ret'):
2017 value = inpart.params.get(name)
2020 value = inpart.params.get(name)
2018 if value is not None:
2021 if value is not None:
2019 kwargs[name] = value
2022 kwargs[name] = value
2020 raise error.PushkeyFailed(inpart.params['in-reply-to'],
2023 raise error.PushkeyFailed(inpart.params['in-reply-to'],
2021 **pycompat.strkwargs(kwargs))
2024 **pycompat.strkwargs(kwargs))
2022
2025
2023 @parthandler('error:unsupportedcontent', ('parttype', 'params'))
2026 @parthandler('error:unsupportedcontent', ('parttype', 'params'))
2024 def handleerrorunsupportedcontent(op, inpart):
2027 def handleerrorunsupportedcontent(op, inpart):
2025 """Used to transmit unknown content error over the wire"""
2028 """Used to transmit unknown content error over the wire"""
2026 kwargs = {}
2029 kwargs = {}
2027 parttype = inpart.params.get('parttype')
2030 parttype = inpart.params.get('parttype')
2028 if parttype is not None:
2031 if parttype is not None:
2029 kwargs['parttype'] = parttype
2032 kwargs['parttype'] = parttype
2030 params = inpart.params.get('params')
2033 params = inpart.params.get('params')
2031 if params is not None:
2034 if params is not None:
2032 kwargs['params'] = params.split('\0')
2035 kwargs['params'] = params.split('\0')
2033
2036
2034 raise error.BundleUnknownFeatureError(**pycompat.strkwargs(kwargs))
2037 raise error.BundleUnknownFeatureError(**pycompat.strkwargs(kwargs))
2035
2038
2036 @parthandler('error:pushraced', ('message',))
2039 @parthandler('error:pushraced', ('message',))
2037 def handleerrorpushraced(op, inpart):
2040 def handleerrorpushraced(op, inpart):
2038 """Used to transmit push race error over the wire"""
2041 """Used to transmit push race error over the wire"""
2039 raise error.ResponseError(_('push failed:'), inpart.params['message'])
2042 raise error.ResponseError(_('push failed:'), inpart.params['message'])
2040
2043
2041 @parthandler('listkeys', ('namespace',))
2044 @parthandler('listkeys', ('namespace',))
2042 def handlelistkeys(op, inpart):
2045 def handlelistkeys(op, inpart):
2043 """retrieve pushkey namespace content stored in a bundle2"""
2046 """retrieve pushkey namespace content stored in a bundle2"""
2044 namespace = inpart.params['namespace']
2047 namespace = inpart.params['namespace']
2045 r = pushkey.decodekeys(inpart.read())
2048 r = pushkey.decodekeys(inpart.read())
2046 op.records.add('listkeys', (namespace, r))
2049 op.records.add('listkeys', (namespace, r))
2047
2050
2048 @parthandler('pushkey', ('namespace', 'key', 'old', 'new'))
2051 @parthandler('pushkey', ('namespace', 'key', 'old', 'new'))
2049 def handlepushkey(op, inpart):
2052 def handlepushkey(op, inpart):
2050 """process a pushkey request"""
2053 """process a pushkey request"""
2051 dec = pushkey.decode
2054 dec = pushkey.decode
2052 namespace = dec(inpart.params['namespace'])
2055 namespace = dec(inpart.params['namespace'])
2053 key = dec(inpart.params['key'])
2056 key = dec(inpart.params['key'])
2054 old = dec(inpart.params['old'])
2057 old = dec(inpart.params['old'])
2055 new = dec(inpart.params['new'])
2058 new = dec(inpart.params['new'])
2056 # Grab the transaction to ensure that we have the lock before performing the
2059 # Grab the transaction to ensure that we have the lock before performing the
2057 # pushkey.
2060 # pushkey.
2058 if op.ui.configbool('experimental', 'bundle2lazylocking'):
2061 if op.ui.configbool('experimental', 'bundle2lazylocking'):
2059 op.gettransaction()
2062 op.gettransaction()
2060 ret = op.repo.pushkey(namespace, key, old, new)
2063 ret = op.repo.pushkey(namespace, key, old, new)
2061 record = {'namespace': namespace,
2064 record = {'namespace': namespace,
2062 'key': key,
2065 'key': key,
2063 'old': old,
2066 'old': old,
2064 'new': new}
2067 'new': new}
2065 op.records.add('pushkey', record)
2068 op.records.add('pushkey', record)
2066 if op.reply is not None:
2069 if op.reply is not None:
2067 rpart = op.reply.newpart('reply:pushkey')
2070 rpart = op.reply.newpart('reply:pushkey')
2068 rpart.addparam(
2071 rpart.addparam(
2069 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
2072 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
2070 rpart.addparam('return', '%i' % ret, mandatory=False)
2073 rpart.addparam('return', '%i' % ret, mandatory=False)
2071 if inpart.mandatory and not ret:
2074 if inpart.mandatory and not ret:
2072 kwargs = {}
2075 kwargs = {}
2073 for key in ('namespace', 'key', 'new', 'old', 'ret'):
2076 for key in ('namespace', 'key', 'new', 'old', 'ret'):
2074 if key in inpart.params:
2077 if key in inpart.params:
2075 kwargs[key] = inpart.params[key]
2078 kwargs[key] = inpart.params[key]
2076 raise error.PushkeyFailed(partid='%d' % inpart.id,
2079 raise error.PushkeyFailed(partid='%d' % inpart.id,
2077 **pycompat.strkwargs(kwargs))
2080 **pycompat.strkwargs(kwargs))
2078
2081
2079 @parthandler('bookmarks')
2082 @parthandler('bookmarks')
2080 def handlebookmark(op, inpart):
2083 def handlebookmark(op, inpart):
2081 """transmit bookmark information
2084 """transmit bookmark information
2082
2085
2083 The part contains binary encoded bookmark information.
2086 The part contains binary encoded bookmark information.
2084
2087
2085 The exact behavior of this part can be controlled by the 'bookmarks' mode
2088 The exact behavior of this part can be controlled by the 'bookmarks' mode
2086 on the bundle operation.
2089 on the bundle operation.
2087
2090
2088 When mode is 'apply' (the default) the bookmark information is applied as
2091 When mode is 'apply' (the default) the bookmark information is applied as
2089 is to the unbundling repository. Make sure a 'check:bookmarks' part is
2092 is to the unbundling repository. Make sure a 'check:bookmarks' part is
2090 issued earlier to check for push races in such an update. This behavior is
2093 issued earlier to check for push races in such an update. This behavior is
2091 suitable for pushing.
2094 suitable for pushing.
2092
2095
2093 When mode is 'records', the information is recorded into the 'bookmarks'
2096 When mode is 'records', the information is recorded into the 'bookmarks'
2094 records of the bundle operation. This behavior is suitable for pulling.
2097 records of the bundle operation. This behavior is suitable for pulling.
2095 """
2098 """
2096 changes = bookmarks.binarydecode(inpart)
2099 changes = bookmarks.binarydecode(inpart)
2097
2100
2098 pushkeycompat = op.repo.ui.configbool('server', 'bookmarks-pushkey-compat')
2101 pushkeycompat = op.repo.ui.configbool('server', 'bookmarks-pushkey-compat')
2099 bookmarksmode = op.modes.get('bookmarks', 'apply')
2102 bookmarksmode = op.modes.get('bookmarks', 'apply')
2100
2103
2101 if bookmarksmode == 'apply':
2104 if bookmarksmode == 'apply':
2102 tr = op.gettransaction()
2105 tr = op.gettransaction()
2103 bookstore = op.repo._bookmarks
2106 bookstore = op.repo._bookmarks
2104 if pushkeycompat:
2107 if pushkeycompat:
2105 allhooks = []
2108 allhooks = []
2106 for book, node in changes:
2109 for book, node in changes:
2107 hookargs = tr.hookargs.copy()
2110 hookargs = tr.hookargs.copy()
2108 hookargs['pushkeycompat'] = '1'
2111 hookargs['pushkeycompat'] = '1'
2109 hookargs['namespace'] = 'bookmarks'
2112 hookargs['namespace'] = 'bookmarks'
2110 hookargs['key'] = book
2113 hookargs['key'] = book
2111 hookargs['old'] = nodemod.hex(bookstore.get(book, ''))
2114 hookargs['old'] = nodemod.hex(bookstore.get(book, ''))
2112 hookargs['new'] = nodemod.hex(node if node is not None else '')
2115 hookargs['new'] = nodemod.hex(node if node is not None else '')
2113 allhooks.append(hookargs)
2116 allhooks.append(hookargs)
2114
2117
2115 for hookargs in allhooks:
2118 for hookargs in allhooks:
2116 op.repo.hook('prepushkey', throw=True,
2119 op.repo.hook('prepushkey', throw=True,
2117 **pycompat.strkwargs(hookargs))
2120 **pycompat.strkwargs(hookargs))
2118
2121
2119 bookstore.applychanges(op.repo, op.gettransaction(), changes)
2122 bookstore.applychanges(op.repo, op.gettransaction(), changes)
2120
2123
2121 if pushkeycompat:
2124 if pushkeycompat:
2122 def runhook():
2125 def runhook():
2123 for hookargs in allhooks:
2126 for hookargs in allhooks:
2124 op.repo.hook('pushkey', **pycompat.strkwargs(hookargs))
2127 op.repo.hook('pushkey', **pycompat.strkwargs(hookargs))
2125 op.repo._afterlock(runhook)
2128 op.repo._afterlock(runhook)
2126
2129
2127 elif bookmarksmode == 'records':
2130 elif bookmarksmode == 'records':
2128 for book, node in changes:
2131 for book, node in changes:
2129 record = {'bookmark': book, 'node': node}
2132 record = {'bookmark': book, 'node': node}
2130 op.records.add('bookmarks', record)
2133 op.records.add('bookmarks', record)
2131 else:
2134 else:
2132 raise error.ProgrammingError('unknown bookmark mode: %s' % bookmarksmode)
2135 raise error.ProgrammingError('unknown bookmark mode: %s' % bookmarksmode)
2133
2136
2134 @parthandler('phase-heads')
2137 @parthandler('phase-heads')
2135 def handlephases(op, inpart):
2138 def handlephases(op, inpart):
2136 """apply phases from bundle part to repo"""
2139 """apply phases from bundle part to repo"""
2137 headsbyphase = phases.binarydecode(inpart)
2140 headsbyphase = phases.binarydecode(inpart)
2138 phases.updatephases(op.repo.unfiltered(), op.gettransaction, headsbyphase)
2141 phases.updatephases(op.repo.unfiltered(), op.gettransaction, headsbyphase)
2139
2142
2140 @parthandler('reply:pushkey', ('return', 'in-reply-to'))
2143 @parthandler('reply:pushkey', ('return', 'in-reply-to'))
2141 def handlepushkeyreply(op, inpart):
2144 def handlepushkeyreply(op, inpart):
2142 """retrieve the result of a pushkey request"""
2145 """retrieve the result of a pushkey request"""
2143 ret = int(inpart.params['return'])
2146 ret = int(inpart.params['return'])
2144 partid = int(inpart.params['in-reply-to'])
2147 partid = int(inpart.params['in-reply-to'])
2145 op.records.add('pushkey', {'return': ret}, partid)
2148 op.records.add('pushkey', {'return': ret}, partid)
2146
2149
2147 @parthandler('obsmarkers')
2150 @parthandler('obsmarkers')
2148 def handleobsmarker(op, inpart):
2151 def handleobsmarker(op, inpart):
2149 """add a stream of obsmarkers to the repo"""
2152 """add a stream of obsmarkers to the repo"""
2150 tr = op.gettransaction()
2153 tr = op.gettransaction()
2151 markerdata = inpart.read()
2154 markerdata = inpart.read()
2152 if op.ui.config('experimental', 'obsmarkers-exchange-debug'):
2155 if op.ui.config('experimental', 'obsmarkers-exchange-debug'):
2153 op.ui.write(('obsmarker-exchange: %i bytes received\n')
2156 op.ui.write(('obsmarker-exchange: %i bytes received\n')
2154 % len(markerdata))
2157 % len(markerdata))
2155 # The mergemarkers call will crash if marker creation is not enabled.
2158 # The mergemarkers call will crash if marker creation is not enabled.
2156 # we want to avoid this if the part is advisory.
2159 # we want to avoid this if the part is advisory.
2157 if not inpart.mandatory and op.repo.obsstore.readonly:
2160 if not inpart.mandatory and op.repo.obsstore.readonly:
2158 op.repo.ui.debug('ignoring obsolescence markers, feature not enabled\n')
2161 op.repo.ui.debug('ignoring obsolescence markers, feature not enabled\n')
2159 return
2162 return
2160 new = op.repo.obsstore.mergemarkers(tr, markerdata)
2163 new = op.repo.obsstore.mergemarkers(tr, markerdata)
2161 op.repo.invalidatevolatilesets()
2164 op.repo.invalidatevolatilesets()
2162 if new:
2165 if new:
2163 op.repo.ui.status(_('%i new obsolescence markers\n') % new)
2166 op.repo.ui.status(_('%i new obsolescence markers\n') % new)
2164 op.records.add('obsmarkers', {'new': new})
2167 op.records.add('obsmarkers', {'new': new})
2165 if op.reply is not None:
2168 if op.reply is not None:
2166 rpart = op.reply.newpart('reply:obsmarkers')
2169 rpart = op.reply.newpart('reply:obsmarkers')
2167 rpart.addparam(
2170 rpart.addparam(
2168 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
2171 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
2169 rpart.addparam('new', '%i' % new, mandatory=False)
2172 rpart.addparam('new', '%i' % new, mandatory=False)
2170
2173
2171
2174
2172 @parthandler('reply:obsmarkers', ('new', 'in-reply-to'))
2175 @parthandler('reply:obsmarkers', ('new', 'in-reply-to'))
2173 def handleobsmarkerreply(op, inpart):
2176 def handleobsmarkerreply(op, inpart):
2174 """retrieve the result of an obsmarkers part application"""
2177 """retrieve the result of an obsmarkers part application"""
2175 ret = int(inpart.params['new'])
2178 ret = int(inpart.params['new'])
2176 partid = int(inpart.params['in-reply-to'])
2179 partid = int(inpart.params['in-reply-to'])
2177 op.records.add('obsmarkers', {'new': ret}, partid)
2180 op.records.add('obsmarkers', {'new': ret}, partid)
2178
2181
2179 @parthandler('hgtagsfnodes')
2182 @parthandler('hgtagsfnodes')
2180 def handlehgtagsfnodes(op, inpart):
2183 def handlehgtagsfnodes(op, inpart):
2181 """Applies .hgtags fnodes cache entries to the local repo.
2184 """Applies .hgtags fnodes cache entries to the local repo.
2182
2185
2183 Payload is pairs of 20 byte changeset nodes and filenodes.
2186 Payload is pairs of 20 byte changeset nodes and filenodes.
2184 """
2187 """
2185 # Grab the transaction so we ensure that we have the lock at this point.
2188 # Grab the transaction so we ensure that we have the lock at this point.
2186 if op.ui.configbool('experimental', 'bundle2lazylocking'):
2189 if op.ui.configbool('experimental', 'bundle2lazylocking'):
2187 op.gettransaction()
2190 op.gettransaction()
2188 cache = tags.hgtagsfnodescache(op.repo.unfiltered())
2191 cache = tags.hgtagsfnodescache(op.repo.unfiltered())
2189
2192
2190 count = 0
2193 count = 0
2191 while True:
2194 while True:
2192 node = inpart.read(20)
2195 node = inpart.read(20)
2193 fnode = inpart.read(20)
2196 fnode = inpart.read(20)
2194 if len(node) < 20 or len(fnode) < 20:
2197 if len(node) < 20 or len(fnode) < 20:
2195 op.ui.debug('ignoring incomplete received .hgtags fnodes data\n')
2198 op.ui.debug('ignoring incomplete received .hgtags fnodes data\n')
2196 break
2199 break
2197 cache.setfnode(node, fnode)
2200 cache.setfnode(node, fnode)
2198 count += 1
2201 count += 1
2199
2202
2200 cache.write()
2203 cache.write()
2201 op.ui.debug('applied %i hgtags fnodes cache entries\n' % count)
2204 op.ui.debug('applied %i hgtags fnodes cache entries\n' % count)
2202
2205
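Standalone illustration of the 'hgtagsfnodes' payload consumed above: a flat sequence of (changeset node, .hgtags filenode) 20-byte pairs, exactly what the generating code near the top of this file joins together (fake values here).

    node, fnode = b'\xaa' * 20, b'\xbb' * 20
    payload = node + fnode                 # one cache entry is 40 bytes
    assert len(payload) == 40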
2203 rbcstruct = struct.Struct('>III')
2206 rbcstruct = struct.Struct('>III')
2204
2207
2205 @parthandler('cache:rev-branch-cache')
2208 @parthandler('cache:rev-branch-cache')
2206 def handlerbc(op, inpart):
2209 def handlerbc(op, inpart):
2207 """receive a rev-branch-cache payload and update the local cache
2210 """receive a rev-branch-cache payload and update the local cache
2208
2211
2209 The payload is a series of records, one per branch:
2212 The payload is a series of records, one per branch:
2210
2213
2211 1) branch name length
2214 1) branch name length
2212 2) number of open heads
2215 2) number of open heads
2213 3) number of closed heads
2216 3) number of closed heads
2214 4) open heads nodes
2217 4) open heads nodes
2215 5) closed heads nodes
2218 5) closed heads nodes
2216 """
2219 """
2217 total = 0
2220 total = 0
2218 rawheader = inpart.read(rbcstruct.size)
2221 rawheader = inpart.read(rbcstruct.size)
2219 cache = op.repo.revbranchcache()
2222 cache = op.repo.revbranchcache()
2220 cl = op.repo.unfiltered().changelog
2223 cl = op.repo.unfiltered().changelog
2221 while rawheader:
2224 while rawheader:
2222 header = rbcstruct.unpack(rawheader)
2225 header = rbcstruct.unpack(rawheader)
2223 total += header[1] + header[2]
2226 total += header[1] + header[2]
2224 utf8branch = inpart.read(header[0])
2227 utf8branch = inpart.read(header[0])
2225 branch = encoding.tolocal(utf8branch)
2228 branch = encoding.tolocal(utf8branch)
2226 for x in pycompat.xrange(header[1]):
2229 for x in pycompat.xrange(header[1]):
2227 node = inpart.read(20)
2230 node = inpart.read(20)
2228 rev = cl.rev(node)
2231 rev = cl.rev(node)
2229 cache.setdata(branch, rev, node, False)
2232 cache.setdata(branch, rev, node, False)
2230 for x in pycompat.xrange(header[2]):
2233 for x in pycompat.xrange(header[2]):
2231 node = inpart.read(20)
2234 node = inpart.read(20)
2232 rev = cl.rev(node)
2235 rev = cl.rev(node)
2233 cache.setdata(branch, rev, node, True)
2236 cache.setdata(branch, rev, node, True)
2234 rawheader = inpart.read(rbcstruct.size)
2237 rawheader = inpart.read(rbcstruct.size)
2235 cache.write()
2238 cache.write()
2236
2239
2237 @parthandler('pushvars')
2240 @parthandler('pushvars')
2238 def bundle2getvars(op, part):
2241 def bundle2getvars(op, part):
2239 '''unbundle a bundle2 containing shellvars on the server'''
2242 '''unbundle a bundle2 containing shellvars on the server'''
2240 # An option to disable unbundling on server-side for security reasons
2243 # An option to disable unbundling on server-side for security reasons
2241 if op.ui.configbool('push', 'pushvars.server'):
2244 if op.ui.configbool('push', 'pushvars.server'):
2242 hookargs = {}
2245 hookargs = {}
2243 for key, value in part.advisoryparams:
2246 for key, value in part.advisoryparams:
2244 key = key.upper()
2247 key = key.upper()
2245 # We want pushed variables to have USERVAR_ prepended so we know
2248 # We want pushed variables to have USERVAR_ prepended so we know
2246 # they came from the --pushvar flag.
2249 # they came from the --pushvar flag.
2247 key = "USERVAR_" + key
2250 key = "USERVAR_" + key
2248 hookargs[key] = value
2251 hookargs[key] = value
2249 op.addhookargs(hookargs)
2252 op.addhookargs(hookargs)
2250
2253
2251 @parthandler('stream2', ('requirements', 'filecount', 'bytecount'))
2254 @parthandler('stream2', ('requirements', 'filecount', 'bytecount'))
2252 def handlestreamv2bundle(op, part):
2255 def handlestreamv2bundle(op, part):
2253
2256
2254 requirements = urlreq.unquote(part.params['requirements']).split(',')
2257 requirements = urlreq.unquote(part.params['requirements']).split(',')
2255 filecount = int(part.params['filecount'])
2258 filecount = int(part.params['filecount'])
2256 bytecount = int(part.params['bytecount'])
2259 bytecount = int(part.params['bytecount'])
2257
2260
2258 repo = op.repo
2261 repo = op.repo
2259 if len(repo):
2262 if len(repo):
2260 msg = _('cannot apply stream clone to non-empty repository')
2263 msg = _('cannot apply stream clone to non-empty repository')
2261 raise error.Abort(msg)
2264 raise error.Abort(msg)
2262
2265
2263 repo.ui.debug('applying stream bundle\n')
2266 repo.ui.debug('applying stream bundle\n')
2264 streamclone.applybundlev2(repo, part, filecount, bytecount,
2267 streamclone.applybundlev2(repo, part, filecount, bytecount,
2265 requirements)
2268 requirements)
@@ -1,2714 +1,2746
1 # localrepo.py - read/write repository class for mercurial
1 # localrepo.py - read/write repository class for mercurial
2 #
2 #
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from __future__ import absolute_import
8 from __future__ import absolute_import
9
9
10 import errno
10 import errno
11 import hashlib
11 import hashlib
12 import os
12 import os
13 import random
13 import random
14 import sys
14 import sys
15 import time
15 import time
16 import weakref
16 import weakref
17
17
18 from .i18n import _
18 from .i18n import _
19 from .node import (
19 from .node import (
20 hex,
20 hex,
21 nullid,
21 nullid,
22 short,
22 short,
23 )
23 )
24 from . import (
24 from . import (
25 bookmarks,
25 bookmarks,
26 branchmap,
26 branchmap,
27 bundle2,
27 bundle2,
28 changegroup,
28 changegroup,
29 changelog,
29 changelog,
30 color,
30 color,
31 context,
31 context,
32 dirstate,
32 dirstate,
33 dirstateguard,
33 dirstateguard,
34 discovery,
34 discovery,
35 encoding,
35 encoding,
36 error,
36 error,
37 exchange,
37 exchange,
38 extensions,
38 extensions,
39 filelog,
39 filelog,
40 hook,
40 hook,
41 lock as lockmod,
41 lock as lockmod,
42 manifest,
42 manifest,
43 match as matchmod,
43 match as matchmod,
44 merge as mergemod,
44 merge as mergemod,
45 mergeutil,
45 mergeutil,
46 namespaces,
46 namespaces,
47 narrowspec,
47 narrowspec,
48 obsolete,
48 obsolete,
49 pathutil,
49 pathutil,
50 phases,
50 phases,
51 pushkey,
51 pushkey,
52 pycompat,
52 pycompat,
53 repository,
53 repository,
54 repoview,
54 repoview,
55 revset,
55 revset,
56 revsetlang,
56 revsetlang,
57 scmutil,
57 scmutil,
58 sparse,
58 sparse,
59 store as storemod,
59 store as storemod,
60 subrepoutil,
60 subrepoutil,
61 tags as tagsmod,
61 tags as tagsmod,
62 transaction,
62 transaction,
63 txnutil,
63 txnutil,
64 util,
64 util,
65 vfs as vfsmod,
65 vfs as vfsmod,
66 )
66 )
67 from .utils import (
67 from .utils import (
68 interfaceutil,
68 interfaceutil,
69 procutil,
69 procutil,
70 stringutil,
70 stringutil,
71 )
71 )
72
72
73 from .revlogutils import (
73 from .revlogutils import (
74 constants as revlogconst,
74 constants as revlogconst,
75 )
75 )
76
76
77 release = lockmod.release
77 release = lockmod.release
78 urlerr = util.urlerr
78 urlerr = util.urlerr
79 urlreq = util.urlreq
79 urlreq = util.urlreq
80
80
81 # set of (path, vfs-location) tuples. vfs-location is:
81 # set of (path, vfs-location) tuples. vfs-location is:
82 # - 'plain' for vfs relative paths
82 # - 'plain' for vfs relative paths
83 # - '' for svfs relative paths
83 # - '' for svfs relative paths
84 _cachedfiles = set()
84 _cachedfiles = set()
85
85
86 class _basefilecache(scmutil.filecache):
86 class _basefilecache(scmutil.filecache):
87 """All filecache usage on repo is done for logic that should be unfiltered
87 """All filecache usage on repo is done for logic that should be unfiltered
88 """
88 """
89 def __get__(self, repo, type=None):
89 def __get__(self, repo, type=None):
90 if repo is None:
90 if repo is None:
91 return self
91 return self
92 return super(_basefilecache, self).__get__(repo.unfiltered(), type)
92 return super(_basefilecache, self).__get__(repo.unfiltered(), type)
93 def __set__(self, repo, value):
93 def __set__(self, repo, value):
94 return super(_basefilecache, self).__set__(repo.unfiltered(), value)
94 return super(_basefilecache, self).__set__(repo.unfiltered(), value)
95 def __delete__(self, repo):
95 def __delete__(self, repo):
96 return super(_basefilecache, self).__delete__(repo.unfiltered())
96 return super(_basefilecache, self).__delete__(repo.unfiltered())
97
97
98 class repofilecache(_basefilecache):
98 class repofilecache(_basefilecache):
99 """filecache for files in .hg but outside of .hg/store"""
99 """filecache for files in .hg but outside of .hg/store"""
100 def __init__(self, *paths):
100 def __init__(self, *paths):
101 super(repofilecache, self).__init__(*paths)
101 super(repofilecache, self).__init__(*paths)
102 for path in paths:
102 for path in paths:
103 _cachedfiles.add((path, 'plain'))
103 _cachedfiles.add((path, 'plain'))
104
104
105 def join(self, obj, fname):
105 def join(self, obj, fname):
106 return obj.vfs.join(fname)
106 return obj.vfs.join(fname)
107
107
108 class storecache(_basefilecache):
108 class storecache(_basefilecache):
109 """filecache for files in the store"""
109 """filecache for files in the store"""
110 def __init__(self, *paths):
110 def __init__(self, *paths):
111 super(storecache, self).__init__(*paths)
111 super(storecache, self).__init__(*paths)
112 for path in paths:
112 for path in paths:
113 _cachedfiles.add((path, ''))
113 _cachedfiles.add((path, ''))
114
114
115 def join(self, obj, fname):
115 def join(self, obj, fname):
116 return obj.sjoin(fname)
116 return obj.sjoin(fname)
117
117
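A hedged sketch of how these two descriptors are meant to be used further down in this module: as decorators on localrepository properties, so the cached value is tied to a tracked file (under .hg/ for repofilecache, under .hg/store/ for storecache) and invalidated when that file changes. The property shown is illustrative.

    @repofilecache('bookmarks')
    def _bookmarks(self):
        # recomputed only when .hg/bookmarks changes on disk
        return bookmarks.bmstore(self)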
118 def isfilecached(repo, name):
118 def isfilecached(repo, name):
119 """check if a repo has already cached "name" filecache-ed property
119 """check if a repo has already cached "name" filecache-ed property
120
120
121 This returns (cachedobj-or-None, iscached) tuple.
121 This returns (cachedobj-or-None, iscached) tuple.
122 """
122 """
123 cacheentry = repo.unfiltered()._filecache.get(name, None)
123 cacheentry = repo.unfiltered()._filecache.get(name, None)
124 if not cacheentry:
124 if not cacheentry:
125 return None, False
125 return None, False
126 return cacheentry.obj, True
126 return cacheentry.obj, True
127
127
128 class unfilteredpropertycache(util.propertycache):
128 class unfilteredpropertycache(util.propertycache):
129 """propertycache that applies to the unfiltered repo only"""
129 """propertycache that applies to the unfiltered repo only"""
130
130
131 def __get__(self, repo, type=None):
131 def __get__(self, repo, type=None):
132 unfi = repo.unfiltered()
132 unfi = repo.unfiltered()
133 if unfi is repo:
133 if unfi is repo:
134 return super(unfilteredpropertycache, self).__get__(unfi)
134 return super(unfilteredpropertycache, self).__get__(unfi)
135 return getattr(unfi, self.name)
135 return getattr(unfi, self.name)
136
136
137 class filteredpropertycache(util.propertycache):
137 class filteredpropertycache(util.propertycache):
138 """propertycache that must take filtering into account"""
138 """propertycache that must take filtering into account"""
139
139
140 def cachevalue(self, obj, value):
140 def cachevalue(self, obj, value):
141 object.__setattr__(obj, self.name, value)
141 object.__setattr__(obj, self.name, value)
142
142
143
143
144 def hasunfilteredcache(repo, name):
144 def hasunfilteredcache(repo, name):
145 """check if a repo has an unfilteredpropertycache value for <name>"""
145 """check if a repo has an unfilteredpropertycache value for <name>"""
146 return name in vars(repo.unfiltered())
146 return name in vars(repo.unfiltered())
147
147
148 def unfilteredmethod(orig):
148 def unfilteredmethod(orig):
149 """decorate method that always needs to be run on the unfiltered version"""
149 """decorate method that always needs to be run on the unfiltered version"""
150 def wrapper(repo, *args, **kwargs):
150 def wrapper(repo, *args, **kwargs):
151 return orig(repo.unfiltered(), *args, **kwargs)
151 return orig(repo.unfiltered(), *args, **kwargs)
152 return wrapper
152 return wrapper
153
153
154 moderncaps = {'lookup', 'branchmap', 'pushkey', 'known', 'getbundle',
154 moderncaps = {'lookup', 'branchmap', 'pushkey', 'known', 'getbundle',
155 'unbundle'}
155 'unbundle'}
156 legacycaps = moderncaps.union({'changegroupsubset'})
156 legacycaps = moderncaps.union({'changegroupsubset'})
157
157
158 @interfaceutil.implementer(repository.ipeercommandexecutor)
158 @interfaceutil.implementer(repository.ipeercommandexecutor)
159 class localcommandexecutor(object):
159 class localcommandexecutor(object):
160 def __init__(self, peer):
160 def __init__(self, peer):
161 self._peer = peer
161 self._peer = peer
162 self._sent = False
162 self._sent = False
163 self._closed = False
163 self._closed = False
164
164
165 def __enter__(self):
165 def __enter__(self):
166 return self
166 return self
167
167
168 def __exit__(self, exctype, excvalue, exctb):
168 def __exit__(self, exctype, excvalue, exctb):
169 self.close()
169 self.close()
170
170
171 def callcommand(self, command, args):
171 def callcommand(self, command, args):
172 if self._sent:
172 if self._sent:
173 raise error.ProgrammingError('callcommand() cannot be used after '
173 raise error.ProgrammingError('callcommand() cannot be used after '
174 'sendcommands()')
174 'sendcommands()')
175
175
176 if self._closed:
176 if self._closed:
177 raise error.ProgrammingError('callcommand() cannot be used after '
177 raise error.ProgrammingError('callcommand() cannot be used after '
178 'close()')
178 'close()')
179
179
180 # We don't need to support anything fancy. Just call the named
180 # We don't need to support anything fancy. Just call the named
181 # method on the peer and return a resolved future.
181 # method on the peer and return a resolved future.
182 fn = getattr(self._peer, pycompat.sysstr(command))
182 fn = getattr(self._peer, pycompat.sysstr(command))
183
183
184 f = pycompat.futures.Future()
184 f = pycompat.futures.Future()
185
185
186 try:
186 try:
187 result = fn(**pycompat.strkwargs(args))
187 result = fn(**pycompat.strkwargs(args))
188 except Exception:
188 except Exception:
189 pycompat.future_set_exception_info(f, sys.exc_info()[1:])
189 pycompat.future_set_exception_info(f, sys.exc_info()[1:])
190 else:
190 else:
191 f.set_result(result)
191 f.set_result(result)
192
192
193 return f
193 return f
194
194
195 def sendcommands(self):
195 def sendcommands(self):
196 self._sent = True
196 self._sent = True
197
197
198 def close(self):
198 def close(self):
199 self._closed = True
199 self._closed = True
200
200
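A hedged usage sketch for localcommandexecutor: because the local peer executes commands synchronously, callcommand() returns an already resolved future and sendcommands() only marks the executor as flushed. `peer` is assumed to be a localpeer instance.

    executor = localcommandexecutor(peer)
    with executor:
        fheads = executor.callcommand(b'heads', {})
        executor.sendcommands()
    heads = fheads.result()   # list of binary head nodes from the local repo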
201 @interfaceutil.implementer(repository.ipeercommands)
201 @interfaceutil.implementer(repository.ipeercommands)
202 class localpeer(repository.peer):
202 class localpeer(repository.peer):
203 '''peer for a local repo; reflects only the most recent API'''
203 '''peer for a local repo; reflects only the most recent API'''
204
204
205 def __init__(self, repo, caps=None):
205 def __init__(self, repo, caps=None):
206 super(localpeer, self).__init__()
206 super(localpeer, self).__init__()
207
207
208 if caps is None:
208 if caps is None:
209 caps = moderncaps.copy()
209 caps = moderncaps.copy()
210 self._repo = repo.filtered('served')
210 self._repo = repo.filtered('served')
211 self.ui = repo.ui
211 self.ui = repo.ui
212 self._caps = repo._restrictcapabilities(caps)
212 self._caps = repo._restrictcapabilities(caps)
213
213
214 # Begin of _basepeer interface.
214 # Begin of _basepeer interface.
215
215
216 def url(self):
216 def url(self):
217 return self._repo.url()
217 return self._repo.url()
218
218
219 def local(self):
219 def local(self):
220 return self._repo
220 return self._repo
221
221
222 def peer(self):
222 def peer(self):
223 return self
223 return self
224
224
225 def canpush(self):
225 def canpush(self):
226 return True
226 return True
227
227
228 def close(self):
228 def close(self):
229 self._repo.close()
229 self._repo.close()
230
230
231 # End of _basepeer interface.
231 # End of _basepeer interface.
232
232
233 # Begin of _basewirecommands interface.
233 # Begin of _basewirecommands interface.
234
234
235 def branchmap(self):
235 def branchmap(self):
236 return self._repo.branchmap()
236 return self._repo.branchmap()
237
237
238 def capabilities(self):
238 def capabilities(self):
239 return self._caps
239 return self._caps
240
240
241 def clonebundles(self):
241 def clonebundles(self):
242 return self._repo.tryread('clonebundles.manifest')
242 return self._repo.tryread('clonebundles.manifest')
243
243
244 def debugwireargs(self, one, two, three=None, four=None, five=None):
244 def debugwireargs(self, one, two, three=None, four=None, five=None):
245 """Used to test argument passing over the wire"""
245 """Used to test argument passing over the wire"""
246 return "%s %s %s %s %s" % (one, two, pycompat.bytestr(three),
246 return "%s %s %s %s %s" % (one, two, pycompat.bytestr(three),
247 pycompat.bytestr(four),
247 pycompat.bytestr(four),
248 pycompat.bytestr(five))
248 pycompat.bytestr(five))
249
249
250 def getbundle(self, source, heads=None, common=None, bundlecaps=None,
250 def getbundle(self, source, heads=None, common=None, bundlecaps=None,
251 **kwargs):
251 **kwargs):
252 chunks = exchange.getbundlechunks(self._repo, source, heads=heads,
252 chunks = exchange.getbundlechunks(self._repo, source, heads=heads,
253 common=common, bundlecaps=bundlecaps,
253 common=common, bundlecaps=bundlecaps,
254 **kwargs)[1]
254 **kwargs)[1]
255 cb = util.chunkbuffer(chunks)
255 cb = util.chunkbuffer(chunks)
256
256
257 if exchange.bundle2requested(bundlecaps):
257 if exchange.bundle2requested(bundlecaps):
258 # When requesting a bundle2, getbundle returns a stream to make the
258 # When requesting a bundle2, getbundle returns a stream to make the
259 # wire level function happier. We need to build a proper object
259 # wire level function happier. We need to build a proper object
260 # from it in local peer.
260 # from it in local peer.
261 return bundle2.getunbundler(self.ui, cb)
261 return bundle2.getunbundler(self.ui, cb)
262 else:
262 else:
263 return changegroup.getunbundler('01', cb, None)
263 return changegroup.getunbundler('01', cb, None)
264
264
265 def heads(self):
265 def heads(self):
266 return self._repo.heads()
266 return self._repo.heads()
267
267
268 def known(self, nodes):
268 def known(self, nodes):
269 return self._repo.known(nodes)
269 return self._repo.known(nodes)
270
270
271 def listkeys(self, namespace):
271 def listkeys(self, namespace):
272 return self._repo.listkeys(namespace)
272 return self._repo.listkeys(namespace)
273
273
274 def lookup(self, key):
274 def lookup(self, key):
275 return self._repo.lookup(key)
275 return self._repo.lookup(key)
276
276
277 def pushkey(self, namespace, key, old, new):
277 def pushkey(self, namespace, key, old, new):
278 return self._repo.pushkey(namespace, key, old, new)
278 return self._repo.pushkey(namespace, key, old, new)
279
279
280 def stream_out(self):
280 def stream_out(self):
281 raise error.Abort(_('cannot perform stream clone against local '
281 raise error.Abort(_('cannot perform stream clone against local '
282 'peer'))
282 'peer'))
283
283
284 def unbundle(self, bundle, heads, url):
284 def unbundle(self, bundle, heads, url):
285 """apply a bundle on a repo
285 """apply a bundle on a repo
286
286
287 This function handles the repo locking itself."""
287 This function handles the repo locking itself."""
288 try:
288 try:
289 try:
289 try:
290 bundle = exchange.readbundle(self.ui, bundle, None)
290 bundle = exchange.readbundle(self.ui, bundle, None)
291 ret = exchange.unbundle(self._repo, bundle, heads, 'push', url)
291 ret = exchange.unbundle(self._repo, bundle, heads, 'push', url)
292 if util.safehasattr(ret, 'getchunks'):
292 if util.safehasattr(ret, 'getchunks'):
293 # This is a bundle20 object, turn it into an unbundler.
293 # This is a bundle20 object, turn it into an unbundler.
294 # This little dance should be dropped eventually when the
294 # This little dance should be dropped eventually when the
295 # API is finally improved.
295 # API is finally improved.
296 stream = util.chunkbuffer(ret.getchunks())
296 stream = util.chunkbuffer(ret.getchunks())
297 ret = bundle2.getunbundler(self.ui, stream)
297 ret = bundle2.getunbundler(self.ui, stream)
298 return ret
298 return ret
299 except Exception as exc:
299 except Exception as exc:
300 # If the exception contains output salvaged from a bundle2
300 # If the exception contains output salvaged from a bundle2
301 # reply, we need to make sure it is printed before continuing
301 # reply, we need to make sure it is printed before continuing
302 # to fail. So we build a bundle2 with such output and consume
302 # to fail. So we build a bundle2 with such output and consume
303 # it directly.
303 # it directly.
304 #
304 #
305 # This is not very elegant but allows a "simple" solution for
305 # This is not very elegant but allows a "simple" solution for
306 # issue4594
306 # issue4594
307 output = getattr(exc, '_bundle2salvagedoutput', ())
307 output = getattr(exc, '_bundle2salvagedoutput', ())
308 if output:
308 if output:
309 bundler = bundle2.bundle20(self._repo.ui)
309 bundler = bundle2.bundle20(self._repo.ui)
310 for out in output:
310 for out in output:
311 bundler.addpart(out)
311 bundler.addpart(out)
312 stream = util.chunkbuffer(bundler.getchunks())
312 stream = util.chunkbuffer(bundler.getchunks())
313 b = bundle2.getunbundler(self.ui, stream)
313 b = bundle2.getunbundler(self.ui, stream)
314 bundle2.processbundle(self._repo, b)
314 bundle2.processbundle(self._repo, b)
315 raise
315 raise
316 except error.PushRaced as exc:
316 except error.PushRaced as exc:
317 raise error.ResponseError(_('push failed:'),
317 raise error.ResponseError(_('push failed:'),
318 stringutil.forcebytestr(exc))
318 stringutil.forcebytestr(exc))
319
319
320 # End of _basewirecommands interface.
320 # End of _basewirecommands interface.
321
321
322 # Begin of peer interface.
322 # Begin of peer interface.
323
323
324 def commandexecutor(self):
324 def commandexecutor(self):
325 return localcommandexecutor(self)
325 return localcommandexecutor(self)
326
326
327 # End of peer interface.
327 # End of peer interface.
328
328
329 @interfaceutil.implementer(repository.ipeerlegacycommands)
329 @interfaceutil.implementer(repository.ipeerlegacycommands)
330 class locallegacypeer(localpeer):
330 class locallegacypeer(localpeer):
331 '''peer extension which implements legacy methods too; used for tests with
331 '''peer extension which implements legacy methods too; used for tests with
332 restricted capabilities'''
332 restricted capabilities'''
333
333
334 def __init__(self, repo):
334 def __init__(self, repo):
335 super(locallegacypeer, self).__init__(repo, caps=legacycaps)
335 super(locallegacypeer, self).__init__(repo, caps=legacycaps)
336
336
337 # Begin of baselegacywirecommands interface.
337 # Begin of baselegacywirecommands interface.
338
338
339 def between(self, pairs):
339 def between(self, pairs):
340 return self._repo.between(pairs)
340 return self._repo.between(pairs)
341
341
342 def branches(self, nodes):
342 def branches(self, nodes):
343 return self._repo.branches(nodes)
343 return self._repo.branches(nodes)
344
344
345 def changegroup(self, nodes, source):
345 def changegroup(self, nodes, source):
346 outgoing = discovery.outgoing(self._repo, missingroots=nodes,
346 outgoing = discovery.outgoing(self._repo, missingroots=nodes,
347 missingheads=self._repo.heads())
347 missingheads=self._repo.heads())
348 return changegroup.makechangegroup(self._repo, outgoing, '01', source)
348 return changegroup.makechangegroup(self._repo, outgoing, '01', source)
349
349
350 def changegroupsubset(self, bases, heads, source):
350 def changegroupsubset(self, bases, heads, source):
351 outgoing = discovery.outgoing(self._repo, missingroots=bases,
351 outgoing = discovery.outgoing(self._repo, missingroots=bases,
352 missingheads=heads)
352 missingheads=heads)
353 return changegroup.makechangegroup(self._repo, outgoing, '01', source)
353 return changegroup.makechangegroup(self._repo, outgoing, '01', source)
354
354
355 # End of baselegacywirecommands interface.
355 # End of baselegacywirecommands interface.
356
356
357 # Increment the sub-version when the revlog v2 format changes to lock out old
357 # Increment the sub-version when the revlog v2 format changes to lock out old
358 # clients.
358 # clients.
359 REVLOGV2_REQUIREMENT = 'exp-revlogv2.0'
359 REVLOGV2_REQUIREMENT = 'exp-revlogv2.0'
360
360
361 # A repository with the sparserevlog feature will have delta chains that
361 # A repository with the sparserevlog feature will have delta chains that
362 # can spread over a larger span. Sparse reading cuts these large spans into
362 # can spread over a larger span. Sparse reading cuts these large spans into
363 # pieces, so that each piece isn't too big.
363 # pieces, so that each piece isn't too big.
364 # Without the sparserevlog capability, reading from the repository could use
364 # Without the sparserevlog capability, reading from the repository could use
365 # huge amounts of memory, because the whole span would be read at once,
365 # huge amounts of memory, because the whole span would be read at once,
366 # including all the intermediate revisions that aren't pertinent for the chain.
366 # including all the intermediate revisions that aren't pertinent for the chain.
367 # This is why once a repository has enabled sparse-read, it becomes required.
367 # This is why once a repository has enabled sparse-read, it becomes required.
368 SPARSEREVLOG_REQUIREMENT = 'sparserevlog'
368 SPARSEREVLOG_REQUIREMENT = 'sparserevlog'
369
369
370 # Functions receiving (ui, features) that extensions can register to impact
370 # Functions receiving (ui, features) that extensions can register to impact
371 # the ability to load repositories with custom requirements. Only
371 # the ability to load repositories with custom requirements. Only
372 # functions defined in loaded extensions are called.
372 # functions defined in loaded extensions are called.
373 #
373 #
374 # The function receives a set of requirement strings that the repository
374 # The function receives a set of requirement strings that the repository
375 # is capable of opening. Functions will typically add elements to the
375 # is capable of opening. Functions will typically add elements to the
376 # set to reflect that the extension knows how to handle those requirements.
376 # set to reflect that the extension knows how to handle those requirements.
377 featuresetupfuncs = set()
377 featuresetupfuncs = set()
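As a hedged illustration of the hook described in the comment above, an extension would typically register a feature-setup function along these lines (the module layout and the ``exp-myfeature`` requirement name are invented for the example):

# Hypothetical extension module; nothing below exists in Mercurial itself.
from mercurial import localrepo

def featuresetup(ui, supported):
    # Declare that repositories carrying this requirement can be opened.
    supported.add(b'exp-myfeature')

def uisetup(ui):
    localrepo.featuresetupfuncs.add(featuresetup)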
378
378
379 def makelocalrepository(baseui, path, intents=None):
379 def makelocalrepository(baseui, path, intents=None):
380 """Create a local repository object.
380 """Create a local repository object.
381
381
382 Given arguments needed to construct a local repository, this function
382 Given arguments needed to construct a local repository, this function
383 derives a type suitable for representing that repository and returns an
383 derives a type suitable for representing that repository and returns an
384 instance of it.
384 instance of it.
385
385
386 The returned object conforms to the ``repository.completelocalrepository``
386 The returned object conforms to the ``repository.completelocalrepository``
387 interface.
387 interface.
388 """
388 """
389 ui = baseui.copy()
389 ui = baseui.copy()
390 # Prevent copying repo configuration.
390 # Prevent copying repo configuration.
391 ui.copy = baseui.copy
391 ui.copy = baseui.copy
392
392
393 # Working directory VFS rooted at repository root.
393 # Working directory VFS rooted at repository root.
394 wdirvfs = vfsmod.vfs(path, expandpath=True, realpath=True)
394 wdirvfs = vfsmod.vfs(path, expandpath=True, realpath=True)
395
395
396 # Main VFS for .hg/ directory.
396 # Main VFS for .hg/ directory.
397 hgpath = wdirvfs.join(b'.hg')
397 hgpath = wdirvfs.join(b'.hg')
398 hgvfs = vfsmod.vfs(hgpath, cacheaudited=True)
398 hgvfs = vfsmod.vfs(hgpath, cacheaudited=True)
399
399
400 # The .hg/ path should exist and should be a directory. All other
400 # The .hg/ path should exist and should be a directory. All other
401 # cases are errors.
401 # cases are errors.
402 if not hgvfs.isdir():
402 if not hgvfs.isdir():
403 try:
403 try:
404 hgvfs.stat()
404 hgvfs.stat()
405 except OSError as e:
405 except OSError as e:
406 if e.errno != errno.ENOENT:
406 if e.errno != errno.ENOENT:
407 raise
407 raise
408
408
409 raise error.RepoError(_(b'repository %s not found') % path)
409 raise error.RepoError(_(b'repository %s not found') % path)
410
410
411 # .hg/requires file contains a newline-delimited list of
411 # .hg/requires file contains a newline-delimited list of
412 # features/capabilities the opener (us) must have in order to use
412 # features/capabilities the opener (us) must have in order to use
413 # the repository. This file was introduced in Mercurial 0.9.2,
413 # the repository. This file was introduced in Mercurial 0.9.2,
414 # which means very old repositories may not have one. We assume
414 # which means very old repositories may not have one. We assume
415 # a missing file translates to no requirements.
415 # a missing file translates to no requirements.
416 try:
416 try:
417 requirements = set(hgvfs.read(b'requires').splitlines())
417 requirements = set(hgvfs.read(b'requires').splitlines())
418 except IOError as e:
418 except IOError as e:
419 if e.errno != errno.ENOENT:
419 if e.errno != errno.ENOENT:
420 raise
420 raise
421 requirements = set()
421 requirements = set()
422
422
423 # The .hg/hgrc file may load extensions or contain config options
423 # The .hg/hgrc file may load extensions or contain config options
424 # that influence repository construction. Attempt to load it and
424 # that influence repository construction. Attempt to load it and
425 # process any new extensions that it may have pulled in.
425 # process any new extensions that it may have pulled in.
426 try:
426 try:
427 ui.readconfig(hgvfs.join(b'hgrc'), root=wdirvfs.base)
427 ui.readconfig(hgvfs.join(b'hgrc'), root=wdirvfs.base)
428 except IOError:
428 except IOError:
429 pass
429 pass
430 else:
430 else:
431 extensions.loadall(ui)
431 extensions.loadall(ui)
432
432
433 supportedrequirements = gathersupportedrequirements(ui)
433 supportedrequirements = gathersupportedrequirements(ui)
434
434
435 # We first validate the requirements are known.
435 # We first validate the requirements are known.
436 ensurerequirementsrecognized(requirements, supportedrequirements)
436 ensurerequirementsrecognized(requirements, supportedrequirements)
437
437
438 # Then we validate that the known set is reasonable to use together.
438 # Then we validate that the known set is reasonable to use together.
439 ensurerequirementscompatible(ui, requirements)
439 ensurerequirementscompatible(ui, requirements)
440
440
441 # TODO there are unhandled edge cases related to opening repositories with
441 # TODO there are unhandled edge cases related to opening repositories with
442 # shared storage. If storage is shared, we should also test for requirements
442 # shared storage. If storage is shared, we should also test for requirements
443 # compatibility in the pointed-to repo. This entails loading the .hg/hgrc in
443 # compatibility in the pointed-to repo. This entails loading the .hg/hgrc in
444 # that repo, as that repo may load extensions needed to open it. This is a
444 # that repo, as that repo may load extensions needed to open it. This is a
445 # bit complicated because we don't want the other hgrc to overwrite settings
445 # bit complicated because we don't want the other hgrc to overwrite settings
446 # in this hgrc.
446 # in this hgrc.
447 #
447 #
448 # This bug is somewhat mitigated by the fact that we copy the .hg/requires
448 # This bug is somewhat mitigated by the fact that we copy the .hg/requires
449 # file when sharing repos. But if a requirement is added after the share is
449 # file when sharing repos. But if a requirement is added after the share is
450 # performed, thereby introducing a new requirement for the opener, we may
450 # performed, thereby introducing a new requirement for the opener, we may
451 # not see that and could encounter a run-time error interacting with
451 # not see that and could encounter a run-time error interacting with
452 # that shared store since it has an unknown-to-us requirement.
452 # that shared store since it has an unknown-to-us requirement.
453
453
454 # At this point, we know we should be capable of opening the repository.
454 # At this point, we know we should be capable of opening the repository.
455 # Now get on with doing that.
455 # Now get on with doing that.
456
456
457 # The "store" part of the repository holds versioned data. How it is
457 # The "store" part of the repository holds versioned data. How it is
458 # accessed is determined by various requirements. The ``shared`` or
458 # accessed is determined by various requirements. The ``shared`` or
459 # ``relshared`` requirements indicate the store lives in the path contained
459 # ``relshared`` requirements indicate the store lives in the path contained
460 # in the ``.hg/sharedpath`` file. This is an absolute path for
460 # in the ``.hg/sharedpath`` file. This is an absolute path for
461 # ``shared`` and relative to ``.hg/`` for ``relshared``.
461 # ``shared`` and relative to ``.hg/`` for ``relshared``.
462 if b'shared' in requirements or b'relshared' in requirements:
462 if b'shared' in requirements or b'relshared' in requirements:
463 sharedpath = hgvfs.read(b'sharedpath').rstrip(b'\n')
463 sharedpath = hgvfs.read(b'sharedpath').rstrip(b'\n')
464 if b'relshared' in requirements:
464 if b'relshared' in requirements:
465 sharedpath = hgvfs.join(sharedpath)
465 sharedpath = hgvfs.join(sharedpath)
466
466
467 sharedvfs = vfsmod.vfs(sharedpath, realpath=True)
467 sharedvfs = vfsmod.vfs(sharedpath, realpath=True)
468
468
469 if not sharedvfs.exists():
469 if not sharedvfs.exists():
470 raise error.RepoError(_(b'.hg/sharedpath points to nonexistent '
470 raise error.RepoError(_(b'.hg/sharedpath points to nonexistent '
471 b'directory %s') % sharedvfs.base)
471 b'directory %s') % sharedvfs.base)
472
472
473 storebasepath = sharedvfs.base
473 storebasepath = sharedvfs.base
474 cachepath = sharedvfs.join(b'cache')
474 cachepath = sharedvfs.join(b'cache')
475 else:
475 else:
476 storebasepath = hgvfs.base
476 storebasepath = hgvfs.base
477 cachepath = hgvfs.join(b'cache')
477 cachepath = hgvfs.join(b'cache')
478
478
479 # The store has changed over time and the exact layout is dictated by
479 # The store has changed over time and the exact layout is dictated by
480 # requirements. The store interface abstracts differences across all
480 # requirements. The store interface abstracts differences across all
481 # of them.
481 # of them.
482 store = makestore(requirements, storebasepath,
482 store = makestore(requirements, storebasepath,
483 lambda base: vfsmod.vfs(base, cacheaudited=True))
483 lambda base: vfsmod.vfs(base, cacheaudited=True))
484
485 hgvfs.createmode = store.createmode
484 hgvfs.createmode = store.createmode
486
485
486 storevfs = store.vfs
487 storevfs.options = resolvestorevfsoptions(ui, requirements)
488
487 # The cache vfs is used to manage cache files.
489 # The cache vfs is used to manage cache files.
488 cachevfs = vfsmod.vfs(cachepath, cacheaudited=True)
490 cachevfs = vfsmod.vfs(cachepath, cacheaudited=True)
489 cachevfs.createmode = store.createmode
491 cachevfs.createmode = store.createmode
490
492
491 return localrepository(
493 return localrepository(
492 baseui=baseui,
494 baseui=baseui,
493 ui=ui,
495 ui=ui,
494 origroot=path,
496 origroot=path,
495 wdirvfs=wdirvfs,
497 wdirvfs=wdirvfs,
496 hgvfs=hgvfs,
498 hgvfs=hgvfs,
497 requirements=requirements,
499 requirements=requirements,
498 supportedrequirements=supportedrequirements,
500 supportedrequirements=supportedrequirements,
499 sharedpath=storebasepath,
501 sharedpath=storebasepath,
500 store=store,
502 store=store,
501 cachevfs=cachevfs,
503 cachevfs=cachevfs,
502 intents=intents)
504 intents=intents)
503
505
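As the docstring notes, callers normally reach this constructor through higher-level helpers. A small sketch of the usual entry point; the repository path is a placeholder:

# Sketch: hg.repository() ends up calling makelocalrepository() for plain
# local repositories.
from mercurial import hg, ui as uimod

ui = uimod.ui.load()
repo = hg.repository(ui, b'/path/to/repo')   # placeholder path
print(len(repo))                              # number of changesets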
504 def gathersupportedrequirements(ui):
506 def gathersupportedrequirements(ui):
505 """Determine the complete set of recognized requirements."""
507 """Determine the complete set of recognized requirements."""
506 # Start with all requirements supported by this file.
508 # Start with all requirements supported by this file.
507 supported = set(localrepository._basesupported)
509 supported = set(localrepository._basesupported)
508
510
509 # Execute ``featuresetupfuncs`` entries if they belong to an extension
511 # Execute ``featuresetupfuncs`` entries if they belong to an extension
510 # relevant to this ui instance.
512 # relevant to this ui instance.
511 modules = {m.__name__ for n, m in extensions.extensions(ui)}
513 modules = {m.__name__ for n, m in extensions.extensions(ui)}
512
514
513 for fn in featuresetupfuncs:
515 for fn in featuresetupfuncs:
514 if fn.__module__ in modules:
516 if fn.__module__ in modules:
515 fn(ui, supported)
517 fn(ui, supported)
516
518
517 # Add derived requirements from registered compression engines.
519 # Add derived requirements from registered compression engines.
518 for name in util.compengines:
520 for name in util.compengines:
519 engine = util.compengines[name]
521 engine = util.compengines[name]
520 if engine.revlogheader():
522 if engine.revlogheader():
521 supported.add(b'exp-compression-%s' % name)
523 supported.add(b'exp-compression-%s' % name)
522
524
523 return supported
525 return supported
524
526
525 def ensurerequirementsrecognized(requirements, supported):
527 def ensurerequirementsrecognized(requirements, supported):
526 """Validate that a set of local requirements is recognized.
528 """Validate that a set of local requirements is recognized.
527
529
528 Receives a set of requirements. Raises an ``error.RepoError`` if there
530 Receives a set of requirements. Raises an ``error.RepoError`` if there
529 exists any requirement in that set that currently loaded code doesn't
531 exists any requirement in that set that currently loaded code doesn't
530 recognize.
532 recognize.
531
533
532 Returns a set of supported requirements.
534 Returns a set of supported requirements.
533 """
535 """
534 missing = set()
536 missing = set()
535
537
536 for requirement in requirements:
538 for requirement in requirements:
537 if requirement in supported:
539 if requirement in supported:
538 continue
540 continue
539
541
540 if not requirement or not requirement[0:1].isalnum():
542 if not requirement or not requirement[0:1].isalnum():
541 raise error.RequirementError(_(b'.hg/requires file is corrupt'))
543 raise error.RequirementError(_(b'.hg/requires file is corrupt'))
542
544
543 missing.add(requirement)
545 missing.add(requirement)
544
546
545 if missing:
547 if missing:
546 raise error.RequirementError(
548 raise error.RequirementError(
547 _(b'repository requires features unknown to this Mercurial: %s') %
549 _(b'repository requires features unknown to this Mercurial: %s') %
548 b' '.join(sorted(missing)),
550 b' '.join(sorted(missing)),
549 hint=_(b'see https://mercurial-scm.org/wiki/MissingRequirement '
551 hint=_(b'see https://mercurial-scm.org/wiki/MissingRequirement '
550 b'for more information'))
552 b'for more information'))
551
553
552 def ensurerequirementscompatible(ui, requirements):
554 def ensurerequirementscompatible(ui, requirements):
553 """Validates that a set of recognized requirements is mutually compatible.
555 """Validates that a set of recognized requirements is mutually compatible.
554
556
555 Some requirements may not be compatible with others or require
557 Some requirements may not be compatible with others or require
556 config options that aren't enabled. This function is called during
558 config options that aren't enabled. This function is called during
557 repository opening to ensure that the set of requirements needed
559 repository opening to ensure that the set of requirements needed
558 to open a repository is sane and compatible with config options.
560 to open a repository is sane and compatible with config options.
559
561
560 Extensions can monkeypatch this function to perform additional
562 Extensions can monkeypatch this function to perform additional
561 checking.
563 checking.
562
564
563 ``error.RepoError`` should be raised on failure.
565 ``error.RepoError`` should be raised on failure.
564 """
566 """
565 if b'exp-sparse' in requirements and not sparse.enabled:
567 if b'exp-sparse' in requirements and not sparse.enabled:
566 raise error.RepoError(_(b'repository is using sparse feature but '
568 raise error.RepoError(_(b'repository is using sparse feature but '
567 b'sparse is not enabled; enable the '
569 b'sparse is not enabled; enable the '
568 b'"sparse" extensions to access'))
570 b'"sparse" extensions to access'))
569
571
570 def makestore(requirements, path, vfstype):
572 def makestore(requirements, path, vfstype):
571 """Construct a storage object for a repository."""
573 """Construct a storage object for a repository."""
572 if b'store' in requirements:
574 if b'store' in requirements:
573 if b'fncache' in requirements:
575 if b'fncache' in requirements:
574 return storemod.fncachestore(path, vfstype,
576 return storemod.fncachestore(path, vfstype,
575 b'dotencode' in requirements)
577 b'dotencode' in requirements)
576
578
577 return storemod.encodedstore(path, vfstype)
579 return storemod.encodedstore(path, vfstype)
578
580
579 return storemod.basicstore(path, vfstype)
581 return storemod.basicstore(path, vfstype)
580
582
583 def resolvestorevfsoptions(ui, requirements):
584 """Resolve the options to pass to the store vfs opener.
585
586 The returned dict is used to influence behavior of the storage layer.
587 """
588 options = {}
589
590 if b'treemanifest' in requirements:
591 options[b'treemanifest'] = True
592
593 # experimental config: format.manifestcachesize
594 manifestcachesize = ui.configint(b'format', b'manifestcachesize')
595 if manifestcachesize is not None:
596 options[b'manifestcachesize'] = manifestcachesize
597
598 # In the absence of another requirement superseding a revlog-related
599 # requirement, we have to assume the repo is using revlog version 0.
600 # This revlog format is super old and we don't bother trying to parse
601 # opener options for it because those options wouldn't do anything
602 # meaningful on such old repos.
603 if b'revlogv1' in requirements or REVLOGV2_REQUIREMENT in requirements:
604 options.update(resolverevlogstorevfsoptions(ui, requirements))
605
606 return options
607
608 def resolverevlogstorevfsoptions(ui, requirements):
609 """Resolve opener options specific to revlogs."""
610
611 options = {}
612
613 if b'revlogv1' in requirements:
614 options[b'revlogv1'] = True
615 if REVLOGV2_REQUIREMENT in requirements:
616 options[b'revlogv2'] = True
617
618 if b'generaldelta' in requirements:
619 options[b'generaldelta'] = True
620
621 # experimental config: format.chunkcachesize
622 chunkcachesize = ui.configint(b'format', b'chunkcachesize')
623 if chunkcachesize is not None:
624 options[b'chunkcachesize'] = chunkcachesize
625
626 deltabothparents = ui.configbool(b'storage',
627 b'revlog.optimize-delta-parent-choice')
628 options[b'deltabothparents'] = deltabothparents
629
630 options[b'lazydeltabase'] = not scmutil.gddeltaconfig(ui)
631
632 chainspan = ui.configbytes(b'experimental', b'maxdeltachainspan')
633 if 0 <= chainspan:
634 options[b'maxdeltachainspan'] = chainspan
635
636 mmapindexthreshold = ui.configbytes(b'experimental',
637 b'mmapindexthreshold')
638 if mmapindexthreshold is not None:
639 options[b'mmapindexthreshold'] = mmapindexthreshold
640
641 withsparseread = ui.configbool(b'experimental', b'sparse-read')
642 srdensitythres = float(ui.config(b'experimental',
643 b'sparse-read.density-threshold'))
644 srmingapsize = ui.configbytes(b'experimental',
645 b'sparse-read.min-gap-size')
646 options[b'with-sparse-read'] = withsparseread
647 options[b'sparse-read-density-threshold'] = srdensitythres
648 options[b'sparse-read-min-gap-size'] = srmingapsize
649
650 sparserevlog = SPARSEREVLOG_REQUIREMENT in requirements
651 options[b'sparse-revlog'] = sparserevlog
652 if sparserevlog:
653 options[b'generaldelta'] = True
654
655 maxchainlen = None
656 if sparserevlog:
657 maxchainlen = revlogconst.SPARSE_REVLOG_MAX_CHAIN_LENGTH
658 # experimental config: format.maxchainlen
659 maxchainlen = ui.configint(b'format', b'maxchainlen', maxchainlen)
660 if maxchainlen is not None:
661 options[b'maxchainlen'] = maxchainlen
662
663 for r in requirements:
664 if r.startswith(b'exp-compression-'):
665 options[b'compengine'] = r[len(b'exp-compression-'):]
666
667 return options
668
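To make the effect of the two resolver functions concrete, this is roughly the shape of the options dict produced for a repository whose requirements include ``revlogv1``, ``generaldelta`` and ``sparserevlog``; the concrete values below are invented for illustration and depend on user configuration:

# Illustrative only -- example values, not authoritative defaults.
exampleoptions = {
    b'revlogv1': True,
    b'generaldelta': True,                  # also forced on by sparse-revlog
    b'deltabothparents': True,
    b'lazydeltabase': True,
    b'with-sparse-read': False,
    b'sparse-read-density-threshold': 0.5,
    b'sparse-read-min-gap-size': 65536,
    b'sparse-revlog': True,
    b'maxchainlen': 1000,                   # SPARSE_REVLOG_MAX_CHAIN_LENGTH
}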
581 @interfaceutil.implementer(repository.completelocalrepository)
669 @interfaceutil.implementer(repository.completelocalrepository)
582 class localrepository(object):
670 class localrepository(object):
583
671
584 # obsolete experimental requirements:
672 # obsolete experimental requirements:
585 # - manifestv2: An experimental new manifest format that allowed
673 # - manifestv2: An experimental new manifest format that allowed
586 # for stem compression of long paths. Experiment ended up not
674 # for stem compression of long paths. Experiment ended up not
587 # being successful (repository sizes went up due to worse delta
675 # being successful (repository sizes went up due to worse delta
588 # chains), and the code was deleted in 4.6.
676 # chains), and the code was deleted in 4.6.
589 supportedformats = {
677 supportedformats = {
590 'revlogv1',
678 'revlogv1',
591 'generaldelta',
679 'generaldelta',
592 'treemanifest',
680 'treemanifest',
593 REVLOGV2_REQUIREMENT,
681 REVLOGV2_REQUIREMENT,
594 SPARSEREVLOG_REQUIREMENT,
682 SPARSEREVLOG_REQUIREMENT,
595 }
683 }
596 _basesupported = supportedformats | {
684 _basesupported = supportedformats | {
597 'store',
685 'store',
598 'fncache',
686 'fncache',
599 'shared',
687 'shared',
600 'relshared',
688 'relshared',
601 'dotencode',
689 'dotencode',
602 'exp-sparse',
690 'exp-sparse',
603 'internal-phase'
691 'internal-phase'
604 }
692 }
605 openerreqs = {
606 'revlogv1',
607 'generaldelta',
608 'treemanifest',
609 }
610
693
611 # list of prefix for file which can be written without 'wlock'
694 # list of prefix for file which can be written without 'wlock'
612 # Extensions should extend this list when needed
695 # Extensions should extend this list when needed
613 _wlockfreeprefix = {
696 _wlockfreeprefix = {
614 # We might consider requiring 'wlock' for the next
697 # We might consider requiring 'wlock' for the next
615 # two, but pretty much all the existing code assumes
698 # two, but pretty much all the existing code assumes
616 # wlock is not needed so we keep them excluded for
699 # wlock is not needed so we keep them excluded for
617 # now.
700 # now.
618 'hgrc',
701 'hgrc',
619 'requires',
702 'requires',
620 # XXX cache is a complicated business someone
703 # XXX cache is a complicated business someone
621 # should investigate this in depth at some point
704 # should investigate this in depth at some point
622 'cache/',
705 'cache/',
623 # XXX shouldn't be dirstate covered by the wlock?
706 # XXX shouldn't be dirstate covered by the wlock?
624 'dirstate',
707 'dirstate',
625 # XXX bisect was still a bit too messy at the time
708 # XXX bisect was still a bit too messy at the time
626 # this changeset was introduced. Someone should fix
709 # this changeset was introduced. Someone should fix
627 # the remaining bit and drop this line
710 # the remaining bit and drop this line
628 'bisect.state',
711 'bisect.state',
629 }
712 }
630
713
631 def __init__(self, baseui, ui, origroot, wdirvfs, hgvfs, requirements,
714 def __init__(self, baseui, ui, origroot, wdirvfs, hgvfs, requirements,
632 supportedrequirements, sharedpath, store, cachevfs,
715 supportedrequirements, sharedpath, store, cachevfs,
633 intents=None):
716 intents=None):
634 """Create a new local repository instance.
717 """Create a new local repository instance.
635
718
636 Most callers should use ``hg.repository()``, ``localrepo.instance()``,
719 Most callers should use ``hg.repository()``, ``localrepo.instance()``,
637 or ``localrepo.makelocalrepository()`` for obtaining a new repository
720 or ``localrepo.makelocalrepository()`` for obtaining a new repository
638 object.
721 object.
639
722
640 Arguments:
723 Arguments:
641
724
642 baseui
725 baseui
643 ``ui.ui`` instance that ``ui`` argument was based off of.
726 ``ui.ui`` instance that ``ui`` argument was based off of.
644
727
645 ui
728 ui
646 ``ui.ui`` instance for use by the repository.
729 ``ui.ui`` instance for use by the repository.
647
730
648 origroot
731 origroot
649 ``bytes`` path to working directory root of this repository.
732 ``bytes`` path to working directory root of this repository.
650
733
651 wdirvfs
734 wdirvfs
652 ``vfs.vfs`` rooted at the working directory.
735 ``vfs.vfs`` rooted at the working directory.
653
736
654 hgvfs
737 hgvfs
655 ``vfs.vfs`` rooted at .hg/
738 ``vfs.vfs`` rooted at .hg/
656
739
657 requirements
740 requirements
658 ``set`` of bytestrings representing repository opening requirements.
741 ``set`` of bytestrings representing repository opening requirements.
659
742
660 supportedrequirements
743 supportedrequirements
661 ``set`` of bytestrings representing repository requirements that we
744 ``set`` of bytestrings representing repository requirements that we
662 know how to open. May be a superset of ``requirements``.
745 know how to open. May be a superset of ``requirements``.
663
746
664 sharedpath
747 sharedpath
665 ``bytes`` Defining path to storage base directory. Points to a
748 ``bytes`` Defining path to storage base directory. Points to a
666 ``.hg/`` directory somewhere.
749 ``.hg/`` directory somewhere.
667
750
668 store
751 store
669 ``store.basicstore`` (or derived) instance providing access to
752 ``store.basicstore`` (or derived) instance providing access to
670 versioned storage.
753 versioned storage.
671
754
672 cachevfs
755 cachevfs
673 ``vfs.vfs`` used for cache files.
756 ``vfs.vfs`` used for cache files.
674
757
675 intents
758 intents
676 ``set`` of system strings indicating what this repo will be used
759 ``set`` of system strings indicating what this repo will be used
677 for.
760 for.
678 """
761 """
679 self.baseui = baseui
762 self.baseui = baseui
680 self.ui = ui
763 self.ui = ui
681 self.origroot = origroot
764 self.origroot = origroot
682 # vfs rooted at working directory.
765 # vfs rooted at working directory.
683 self.wvfs = wdirvfs
766 self.wvfs = wdirvfs
684 self.root = wdirvfs.base
767 self.root = wdirvfs.base
685 # vfs rooted at .hg/. Used to access most non-store paths.
768 # vfs rooted at .hg/. Used to access most non-store paths.
686 self.vfs = hgvfs
769 self.vfs = hgvfs
687 self.path = hgvfs.base
770 self.path = hgvfs.base
688 self.requirements = requirements
771 self.requirements = requirements
689 self.supported = supportedrequirements
772 self.supported = supportedrequirements
690 self.sharedpath = sharedpath
773 self.sharedpath = sharedpath
691 self.store = store
774 self.store = store
692 self.cachevfs = cachevfs
775 self.cachevfs = cachevfs
693
776
694 self.filtername = None
777 self.filtername = None
695
778
696 if (self.ui.configbool('devel', 'all-warnings') or
779 if (self.ui.configbool('devel', 'all-warnings') or
697 self.ui.configbool('devel', 'check-locks')):
780 self.ui.configbool('devel', 'check-locks')):
698 self.vfs.audit = self._getvfsward(self.vfs.audit)
781 self.vfs.audit = self._getvfsward(self.vfs.audit)
699 # A list of callbacks to shape the phase if no data were found.
782 # A list of callbacks to shape the phase if no data were found.
700 # Callbacks are in the form: func(repo, roots) --> processed root.
783 # Callbacks are in the form: func(repo, roots) --> processed root.
701 # This list is to be filled by extensions during repo setup
784 # This list is to be filled by extensions during repo setup
702 self._phasedefaults = []
785 self._phasedefaults = []
703
786
704 color.setup(self.ui)
787 color.setup(self.ui)
705
788
706 self.spath = self.store.path
789 self.spath = self.store.path
707 self.svfs = self.store.vfs
790 self.svfs = self.store.vfs
708 self.sjoin = self.store.join
791 self.sjoin = self.store.join
709 if (self.ui.configbool('devel', 'all-warnings') or
792 if (self.ui.configbool('devel', 'all-warnings') or
710 self.ui.configbool('devel', 'check-locks')):
793 self.ui.configbool('devel', 'check-locks')):
711 if util.safehasattr(self.svfs, 'vfs'): # this is filtervfs
794 if util.safehasattr(self.svfs, 'vfs'): # this is filtervfs
712 self.svfs.vfs.audit = self._getsvfsward(self.svfs.vfs.audit)
795 self.svfs.vfs.audit = self._getsvfsward(self.svfs.vfs.audit)
713 else: # standard vfs
796 else: # standard vfs
714 self.svfs.audit = self._getsvfsward(self.svfs.audit)
797 self.svfs.audit = self._getsvfsward(self.svfs.audit)
715 self._applyopenerreqs()
716
798
717 self._dirstatevalidatewarned = False
799 self._dirstatevalidatewarned = False
718
800
719 self._branchcaches = {}
801 self._branchcaches = {}
720 self._revbranchcache = None
802 self._revbranchcache = None
721 self._filterpats = {}
803 self._filterpats = {}
722 self._datafilters = {}
804 self._datafilters = {}
723 self._transref = self._lockref = self._wlockref = None
805 self._transref = self._lockref = self._wlockref = None
724
806
725 # A cache for various files under .hg/ that tracks file changes,
807 # A cache for various files under .hg/ that tracks file changes,
726 # (used by the filecache decorator)
808 # (used by the filecache decorator)
727 #
809 #
728 # Maps a property name to its util.filecacheentry
810 # Maps a property name to its util.filecacheentry
729 self._filecache = {}
811 self._filecache = {}
730
812
731 # hold sets of revisions to be filtered
813 # hold sets of revisions to be filtered
732 # should be cleared when something might have changed the filter value:
814 # should be cleared when something might have changed the filter value:
733 # - new changesets,
815 # - new changesets,
734 # - phase change,
816 # - phase change,
735 # - new obsolescence marker,
817 # - new obsolescence marker,
736 # - working directory parent change,
818 # - working directory parent change,
737 # - bookmark changes
819 # - bookmark changes
738 self.filteredrevcache = {}
820 self.filteredrevcache = {}
739
821
740 # post-dirstate-status hooks
822 # post-dirstate-status hooks
741 self._postdsstatus = []
823 self._postdsstatus = []
742
824
743 # generic mapping between names and nodes
825 # generic mapping between names and nodes
744 self.names = namespaces.namespaces()
826 self.names = namespaces.namespaces()
745
827
746 # Key to signature value.
828 # Key to signature value.
747 self._sparsesignaturecache = {}
829 self._sparsesignaturecache = {}
748 # Signature to cached matcher instance.
830 # Signature to cached matcher instance.
749 self._sparsematchercache = {}
831 self._sparsematchercache = {}
750
832
751 def _getvfsward(self, origfunc):
833 def _getvfsward(self, origfunc):
752 """build a ward for self.vfs"""
834 """build a ward for self.vfs"""
753 rref = weakref.ref(self)
835 rref = weakref.ref(self)
754 def checkvfs(path, mode=None):
836 def checkvfs(path, mode=None):
755 ret = origfunc(path, mode=mode)
837 ret = origfunc(path, mode=mode)
756 repo = rref()
838 repo = rref()
757 if (repo is None
839 if (repo is None
758 or not util.safehasattr(repo, '_wlockref')
840 or not util.safehasattr(repo, '_wlockref')
759 or not util.safehasattr(repo, '_lockref')):
841 or not util.safehasattr(repo, '_lockref')):
760 return
842 return
761 if mode in (None, 'r', 'rb'):
843 if mode in (None, 'r', 'rb'):
762 return
844 return
763 if path.startswith(repo.path):
845 if path.startswith(repo.path):
764 # truncate name relative to the repository (.hg)
846 # truncate name relative to the repository (.hg)
765 path = path[len(repo.path) + 1:]
847 path = path[len(repo.path) + 1:]
766 if path.startswith('cache/'):
848 if path.startswith('cache/'):
767 msg = 'accessing cache with vfs instead of cachevfs: "%s"'
849 msg = 'accessing cache with vfs instead of cachevfs: "%s"'
768 repo.ui.develwarn(msg % path, stacklevel=2, config="cache-vfs")
850 repo.ui.develwarn(msg % path, stacklevel=2, config="cache-vfs")
769 if path.startswith('journal.'):
851 if path.startswith('journal.'):
770 # journal is covered by 'lock'
852 # journal is covered by 'lock'
771 if repo._currentlock(repo._lockref) is None:
853 if repo._currentlock(repo._lockref) is None:
772 repo.ui.develwarn('write with no lock: "%s"' % path,
854 repo.ui.develwarn('write with no lock: "%s"' % path,
773 stacklevel=2, config='check-locks')
855 stacklevel=2, config='check-locks')
774 elif repo._currentlock(repo._wlockref) is None:
856 elif repo._currentlock(repo._wlockref) is None:
775 # rest of vfs files are covered by 'wlock'
857 # rest of vfs files are covered by 'wlock'
776 #
858 #
777 # exclude special files
859 # exclude special files
778 for prefix in self._wlockfreeprefix:
860 for prefix in self._wlockfreeprefix:
779 if path.startswith(prefix):
861 if path.startswith(prefix):
780 return
862 return
781 repo.ui.develwarn('write with no wlock: "%s"' % path,
863 repo.ui.develwarn('write with no wlock: "%s"' % path,
782 stacklevel=2, config='check-locks')
864 stacklevel=2, config='check-locks')
783 return ret
865 return ret
784 return checkvfs
866 return checkvfs
785
867
786 def _getsvfsward(self, origfunc):
868 def _getsvfsward(self, origfunc):
787 """build a ward for self.svfs"""
869 """build a ward for self.svfs"""
788 rref = weakref.ref(self)
870 rref = weakref.ref(self)
789 def checksvfs(path, mode=None):
871 def checksvfs(path, mode=None):
790 ret = origfunc(path, mode=mode)
872 ret = origfunc(path, mode=mode)
791 repo = rref()
873 repo = rref()
792 if repo is None or not util.safehasattr(repo, '_lockref'):
874 if repo is None or not util.safehasattr(repo, '_lockref'):
793 return
875 return
794 if mode in (None, 'r', 'rb'):
876 if mode in (None, 'r', 'rb'):
795 return
877 return
796 if path.startswith(repo.sharedpath):
878 if path.startswith(repo.sharedpath):
797 # truncate name relative to the repository (.hg)
879 # truncate name relative to the repository (.hg)
798 path = path[len(repo.sharedpath) + 1:]
880 path = path[len(repo.sharedpath) + 1:]
799 if repo._currentlock(repo._lockref) is None:
881 if repo._currentlock(repo._lockref) is None:
800 repo.ui.develwarn('write with no lock: "%s"' % path,
882 repo.ui.develwarn('write with no lock: "%s"' % path,
801 stacklevel=3)
883 stacklevel=3)
802 return ret
884 return ret
803 return checksvfs
885 return checksvfs
804
886
805 def close(self):
887 def close(self):
806 self._writecaches()
888 self._writecaches()
807
889
808 def _writecaches(self):
890 def _writecaches(self):
809 if self._revbranchcache:
891 if self._revbranchcache:
810 self._revbranchcache.write()
892 self._revbranchcache.write()
811
893
812 def _restrictcapabilities(self, caps):
894 def _restrictcapabilities(self, caps):
813 if self.ui.configbool('experimental', 'bundle2-advertise'):
895 if self.ui.configbool('experimental', 'bundle2-advertise'):
814 caps = set(caps)
896 caps = set(caps)
815 capsblob = bundle2.encodecaps(bundle2.getrepocaps(self,
897 capsblob = bundle2.encodecaps(bundle2.getrepocaps(self,
816 role='client'))
898 role='client'))
817 caps.add('bundle2=' + urlreq.quote(capsblob))
899 caps.add('bundle2=' + urlreq.quote(capsblob))
818 return caps
900 return caps
819
901
820 def _applyopenerreqs(self):
821 self.svfs.options = {r: True for r in self.requirements
822 if r in self.openerreqs}
823 # experimental config: format.chunkcachesize
824 chunkcachesize = self.ui.configint('format', 'chunkcachesize')
825 if chunkcachesize is not None:
826 self.svfs.options['chunkcachesize'] = chunkcachesize
827 # experimental config: format.manifestcachesize
828 manifestcachesize = self.ui.configint('format', 'manifestcachesize')
829 if manifestcachesize is not None:
830 self.svfs.options['manifestcachesize'] = manifestcachesize
831 deltabothparents = self.ui.configbool('storage',
832 'revlog.optimize-delta-parent-choice')
833 self.svfs.options['deltabothparents'] = deltabothparents
834 self.svfs.options['lazydeltabase'] = not scmutil.gddeltaconfig(self.ui)
835 chainspan = self.ui.configbytes('experimental', 'maxdeltachainspan')
836 if 0 <= chainspan:
837 self.svfs.options['maxdeltachainspan'] = chainspan
838 mmapindexthreshold = self.ui.configbytes('experimental',
839 'mmapindexthreshold')
840 if mmapindexthreshold is not None:
841 self.svfs.options['mmapindexthreshold'] = mmapindexthreshold
842 withsparseread = self.ui.configbool('experimental', 'sparse-read')
843 srdensitythres = float(self.ui.config('experimental',
844 'sparse-read.density-threshold'))
845 srmingapsize = self.ui.configbytes('experimental',
846 'sparse-read.min-gap-size')
847 self.svfs.options['with-sparse-read'] = withsparseread
848 self.svfs.options['sparse-read-density-threshold'] = srdensitythres
849 self.svfs.options['sparse-read-min-gap-size'] = srmingapsize
850 sparserevlog = SPARSEREVLOG_REQUIREMENT in self.requirements
851 self.svfs.options['sparse-revlog'] = sparserevlog
852 if sparserevlog:
853 self.svfs.options['generaldelta'] = True
854 maxchainlen = None
855 if sparserevlog:
856 maxchainlen = revlogconst.SPARSE_REVLOG_MAX_CHAIN_LENGTH
857 # experimental config: format.maxchainlen
858 maxchainlen = self.ui.configint('format', 'maxchainlen', maxchainlen)
859 if maxchainlen is not None:
860 self.svfs.options['maxchainlen'] = maxchainlen
861
862 for r in self.requirements:
863 if r.startswith('exp-compression-'):
864 self.svfs.options['compengine'] = r[len('exp-compression-'):]
865
866 # TODO move "revlogv2" to openerreqs once finalized.
867 if REVLOGV2_REQUIREMENT in self.requirements:
868 self.svfs.options['revlogv2'] = True
869
870 def _writerequirements(self):
902 def _writerequirements(self):
871 scmutil.writerequires(self.vfs, self.requirements)
903 scmutil.writerequires(self.vfs, self.requirements)
872
904
873 # Don't cache auditor/nofsauditor, or you'll end up with reference cycle:
905 # Don't cache auditor/nofsauditor, or you'll end up with reference cycle:
874 # self -> auditor -> self._checknested -> self
906 # self -> auditor -> self._checknested -> self
875
907
876 @property
908 @property
877 def auditor(self):
909 def auditor(self):
878 # This is only used by context.workingctx.match in order to
910 # This is only used by context.workingctx.match in order to
879 # detect files in subrepos.
911 # detect files in subrepos.
880 return pathutil.pathauditor(self.root, callback=self._checknested)
912 return pathutil.pathauditor(self.root, callback=self._checknested)
881
913
882 @property
914 @property
883 def nofsauditor(self):
915 def nofsauditor(self):
884 # This is only used by context.basectx.match in order to detect
916 # This is only used by context.basectx.match in order to detect
885 # files in subrepos.
917 # files in subrepos.
886 return pathutil.pathauditor(self.root, callback=self._checknested,
918 return pathutil.pathauditor(self.root, callback=self._checknested,
887 realfs=False, cached=True)
919 realfs=False, cached=True)
888
920
889 def _checknested(self, path):
921 def _checknested(self, path):
890 """Determine if path is a legal nested repository."""
922 """Determine if path is a legal nested repository."""
891 if not path.startswith(self.root):
923 if not path.startswith(self.root):
892 return False
924 return False
893 subpath = path[len(self.root) + 1:]
925 subpath = path[len(self.root) + 1:]
894 normsubpath = util.pconvert(subpath)
926 normsubpath = util.pconvert(subpath)
895
927
896 # XXX: Checking against the current working copy is wrong in
928 # XXX: Checking against the current working copy is wrong in
897 # the sense that it can reject things like
929 # the sense that it can reject things like
898 #
930 #
899 # $ hg cat -r 10 sub/x.txt
931 # $ hg cat -r 10 sub/x.txt
900 #
932 #
901 # if sub/ is no longer a subrepository in the working copy
933 # if sub/ is no longer a subrepository in the working copy
902 # parent revision.
934 # parent revision.
903 #
935 #
904 # However, it can of course also allow things that would have
936 # However, it can of course also allow things that would have
905 # been rejected before, such as the above cat command if sub/
937 # been rejected before, such as the above cat command if sub/
906 # is a subrepository now, but was a normal directory before.
938 # is a subrepository now, but was a normal directory before.
907 # The old path auditor would have rejected by mistake since it
939 # The old path auditor would have rejected by mistake since it
908 # panics when it sees sub/.hg/.
940 # panics when it sees sub/.hg/.
909 #
941 #
910 # All in all, checking against the working copy seems sensible
942 # All in all, checking against the working copy seems sensible
911 # since we want to prevent access to nested repositories on
943 # since we want to prevent access to nested repositories on
912 # the filesystem *now*.
944 # the filesystem *now*.
913 ctx = self[None]
945 ctx = self[None]
914 parts = util.splitpath(subpath)
946 parts = util.splitpath(subpath)
915 while parts:
947 while parts:
916 prefix = '/'.join(parts)
948 prefix = '/'.join(parts)
917 if prefix in ctx.substate:
949 if prefix in ctx.substate:
918 if prefix == normsubpath:
950 if prefix == normsubpath:
919 return True
951 return True
920 else:
952 else:
921 sub = ctx.sub(prefix)
953 sub = ctx.sub(prefix)
922 return sub.checknested(subpath[len(prefix) + 1:])
954 return sub.checknested(subpath[len(prefix) + 1:])
923 else:
955 else:
924 parts.pop()
956 parts.pop()
925 return False
957 return False
926
958
927 def peer(self):
959 def peer(self):
928 return localpeer(self) # not cached to avoid reference cycle
960 return localpeer(self) # not cached to avoid reference cycle
929
961
930 def unfiltered(self):
962 def unfiltered(self):
931 """Return unfiltered version of the repository
963 """Return unfiltered version of the repository
932
964
933 Intended to be overwritten by filtered repo."""
965 Intended to be overwritten by filtered repo."""
934 return self
966 return self
935
967
936 def filtered(self, name, visibilityexceptions=None):
968 def filtered(self, name, visibilityexceptions=None):
937 """Return a filtered version of a repository"""
969 """Return a filtered version of a repository"""
938 cls = repoview.newtype(self.unfiltered().__class__)
970 cls = repoview.newtype(self.unfiltered().__class__)
939 return cls(self, name, visibilityexceptions)
971 return cls(self, name, visibilityexceptions)
940
972
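A brief sketch of the filtering API above; ``repo`` is a placeholder for an opened repository, and 'served'/'visible' are filter names defined in repoview:

# Sketch: obtain filtered views without copying the repository.
served = repo.filtered('served')      # view exposed to peers (see localpeer)
visible = repo.filtered('visible')    # hides hidden/obsolete changesets
unfiltered = visible.unfiltered()     # back to the underlying repository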
941 @repofilecache('bookmarks', 'bookmarks.current')
973 @repofilecache('bookmarks', 'bookmarks.current')
942 def _bookmarks(self):
974 def _bookmarks(self):
943 return bookmarks.bmstore(self)
975 return bookmarks.bmstore(self)
944
976
945 @property
977 @property
946 def _activebookmark(self):
978 def _activebookmark(self):
947 return self._bookmarks.active
979 return self._bookmarks.active
948
980
949 # _phasesets depend on changelog. what we need is to call
981 # _phasesets depend on changelog. what we need is to call
950 # _phasecache.invalidate() if '00changelog.i' was changed, but it
982 # _phasecache.invalidate() if '00changelog.i' was changed, but it
951 # can't be easily expressed in filecache mechanism.
983 # can't be easily expressed in filecache mechanism.
952 @storecache('phaseroots', '00changelog.i')
984 @storecache('phaseroots', '00changelog.i')
953 def _phasecache(self):
985 def _phasecache(self):
954 return phases.phasecache(self, self._phasedefaults)
986 return phases.phasecache(self, self._phasedefaults)
955
987
956 @storecache('obsstore')
988 @storecache('obsstore')
957 def obsstore(self):
989 def obsstore(self):
958 return obsolete.makestore(self.ui, self)
990 return obsolete.makestore(self.ui, self)
959
991
960 @storecache('00changelog.i')
992 @storecache('00changelog.i')
961 def changelog(self):
993 def changelog(self):
962 return changelog.changelog(self.svfs,
994 return changelog.changelog(self.svfs,
963 trypending=txnutil.mayhavepending(self.root))
995 trypending=txnutil.mayhavepending(self.root))
964
996
965 def _constructmanifest(self):
997 def _constructmanifest(self):
966 # This is a temporary function while we migrate from manifest to
998 # This is a temporary function while we migrate from manifest to
967 # manifestlog. It allows bundlerepo and unionrepo to intercept the
999 # manifestlog. It allows bundlerepo and unionrepo to intercept the
968 # manifest creation.
1000 # manifest creation.
969 return manifest.manifestrevlog(self.svfs)
1001 return manifest.manifestrevlog(self.svfs)
970
1002
971 @storecache('00manifest.i')
1003 @storecache('00manifest.i')
972 def manifestlog(self):
1004 def manifestlog(self):
973 return manifest.manifestlog(self.svfs, self)
1005 return manifest.manifestlog(self.svfs, self)
974
1006
975 @repofilecache('dirstate')
1007 @repofilecache('dirstate')
976 def dirstate(self):
1008 def dirstate(self):
977 return self._makedirstate()
1009 return self._makedirstate()
978
1010
979 def _makedirstate(self):
1011 def _makedirstate(self):
980 """Extension point for wrapping the dirstate per-repo."""
1012 """Extension point for wrapping the dirstate per-repo."""
981 sparsematchfn = lambda: sparse.matcher(self)
1013 sparsematchfn = lambda: sparse.matcher(self)
982
1014
983 return dirstate.dirstate(self.vfs, self.ui, self.root,
1015 return dirstate.dirstate(self.vfs, self.ui, self.root,
984 self._dirstatevalidate, sparsematchfn)
1016 self._dirstatevalidate, sparsematchfn)
985
1017
986 def _dirstatevalidate(self, node):
1018 def _dirstatevalidate(self, node):
987 try:
1019 try:
988 self.changelog.rev(node)
1020 self.changelog.rev(node)
989 return node
1021 return node
990 except error.LookupError:
1022 except error.LookupError:
991 if not self._dirstatevalidatewarned:
1023 if not self._dirstatevalidatewarned:
992 self._dirstatevalidatewarned = True
1024 self._dirstatevalidatewarned = True
993 self.ui.warn(_("warning: ignoring unknown"
1025 self.ui.warn(_("warning: ignoring unknown"
994 " working parent %s!\n") % short(node))
1026 " working parent %s!\n") % short(node))
995 return nullid
1027 return nullid
996
1028
997 @storecache(narrowspec.FILENAME)
1029 @storecache(narrowspec.FILENAME)
998 def narrowpats(self):
1030 def narrowpats(self):
999 """matcher patterns for this repository's narrowspec
1031 """matcher patterns for this repository's narrowspec
1000
1032
1001 A tuple of (includes, excludes).
1033 A tuple of (includes, excludes).
1002 """
1034 """
1003 source = self
1035 source = self
1004 if self.shared():
1036 if self.shared():
1005 from . import hg
1037 from . import hg
1006 source = hg.sharedreposource(self)
1038 source = hg.sharedreposource(self)
1007 return narrowspec.load(source)
1039 return narrowspec.load(source)
1008
1040
1009 @storecache(narrowspec.FILENAME)
1041 @storecache(narrowspec.FILENAME)
1010 def _narrowmatch(self):
1042 def _narrowmatch(self):
1011 if repository.NARROW_REQUIREMENT not in self.requirements:
1043 if repository.NARROW_REQUIREMENT not in self.requirements:
1012 return matchmod.always(self.root, '')
1044 return matchmod.always(self.root, '')
1013 include, exclude = self.narrowpats
1045 include, exclude = self.narrowpats
1014 return narrowspec.match(self.root, include=include, exclude=exclude)
1046 return narrowspec.match(self.root, include=include, exclude=exclude)
1015
1047
1016 # TODO(martinvonz): make this property-like instead?
1048 # TODO(martinvonz): make this property-like instead?
1017 def narrowmatch(self):
1049 def narrowmatch(self):
1018 return self._narrowmatch
1050 return self._narrowmatch
1019
1051
1020 def setnarrowpats(self, newincludes, newexcludes):
1052 def setnarrowpats(self, newincludes, newexcludes):
1021 narrowspec.save(self, newincludes, newexcludes)
1053 narrowspec.save(self, newincludes, newexcludes)
1022 self.invalidate(clearfilecache=True)
1054 self.invalidate(clearfilecache=True)
1023
1055
1024 def __getitem__(self, changeid):
1056 def __getitem__(self, changeid):
1025 if changeid is None:
1057 if changeid is None:
1026 return context.workingctx(self)
1058 return context.workingctx(self)
1027 if isinstance(changeid, context.basectx):
1059 if isinstance(changeid, context.basectx):
1028 return changeid
1060 return changeid
1029 if isinstance(changeid, slice):
1061 if isinstance(changeid, slice):
1030 # wdirrev isn't contiguous so the slice shouldn't include it
1062 # wdirrev isn't contiguous so the slice shouldn't include it
1031 return [context.changectx(self, i)
1063 return [context.changectx(self, i)
1032 for i in pycompat.xrange(*changeid.indices(len(self)))
1064 for i in pycompat.xrange(*changeid.indices(len(self)))
1033 if i not in self.changelog.filteredrevs]
1065 if i not in self.changelog.filteredrevs]
1034 try:
1066 try:
1035 return context.changectx(self, changeid)
1067 return context.changectx(self, changeid)
1036 except error.WdirUnsupported:
1068 except error.WdirUnsupported:
1037 return context.workingctx(self)
1069 return context.workingctx(self)
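# A minimal usage sketch (assuming ``repo`` is a localrepository instance;
# values are illustrative only). ``__getitem__`` accepts several forms:
#   wctx = repo[None]     # working directory context
#   ctx = repo[0]         # changectx for revision 0
#   ctxs = repo[0:3]      # list of changectx objects, filtered revs skipped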
1038
1070
1039 def __contains__(self, changeid):
1071 def __contains__(self, changeid):
1040 """True if the given changeid exists
1072 """True if the given changeid exists
1041
1073
1042 error.AmbiguousPrefixLookupError is raised if an ambiguous node is
1074 error.AmbiguousPrefixLookupError is raised if an ambiguous node is
1043 specified.
1075 specified.
1044 """
1076 """
1045 try:
1077 try:
1046 self[changeid]
1078 self[changeid]
1047 return True
1079 return True
1048 except error.RepoLookupError:
1080 except error.RepoLookupError:
1049 return False
1081 return False
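# The method above enables plain membership tests (sketch; ``somenode`` is a
# hypothetical node or revision symbol):
#   if somenode in repo:
#       ctx = repo[somenode]
# Note that ambiguous prefixes raise error.AmbiguousPrefixLookupError instead
# of returning False, per the docstring above.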
1050
1082
1051 def __nonzero__(self):
1083 def __nonzero__(self):
1052 return True
1084 return True
1053
1085
1054 __bool__ = __nonzero__
1086 __bool__ = __nonzero__
1055
1087
1056 def __len__(self):
1088 def __len__(self):
1057 # no need to pay the cost of repoview.changelog
1089 # no need to pay the cost of repoview.changelog
1058 unfi = self.unfiltered()
1090 unfi = self.unfiltered()
1059 return len(unfi.changelog)
1091 return len(unfi.changelog)
1060
1092
1061 def __iter__(self):
1093 def __iter__(self):
1062 return iter(self.changelog)
1094 return iter(self.changelog)
1063
1095
1064 def revs(self, expr, *args):
1096 def revs(self, expr, *args):
1065 '''Find revisions matching a revset.
1097 '''Find revisions matching a revset.
1066
1098
1067 The revset is specified as a string ``expr`` that may contain
1099 The revset is specified as a string ``expr`` that may contain
1068 %-formatting to escape certain types. See ``revsetlang.formatspec``.
1100 %-formatting to escape certain types. See ``revsetlang.formatspec``.
1069
1101
1070 Revset aliases from the configuration are not expanded. To expand
1102 Revset aliases from the configuration are not expanded. To expand
1071 user aliases, consider calling ``scmutil.revrange()`` or
1103 user aliases, consider calling ``scmutil.revrange()`` or
1072 ``repo.anyrevs([expr], user=True)``.
1104 ``repo.anyrevs([expr], user=True)``.
1073
1105
1074 Returns a revset.abstractsmartset, which is a list-like interface
1106 Returns a revset.abstractsmartset, which is a list-like interface
1075 that contains integer revisions.
1107 that contains integer revisions.
1076 '''
1108 '''
1077 expr = revsetlang.formatspec(expr, *args)
1109 expr = revsetlang.formatspec(expr, *args)
1078 m = revset.match(None, expr)
1110 m = revset.match(None, expr)
1079 return m(self)
1111 return m(self)
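# A usage sketch for ``revs()`` (hypothetical revset; %-formatting is handled
# by revsetlang.formatspec as noted in the docstring):
#   for rev in repo.revs('ancestors(%d) and not public()', 5):
#       repo.ui.write('%d\n' % rev)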
1080
1112
1081 def set(self, expr, *args):
1113 def set(self, expr, *args):
1082 '''Find revisions matching a revset and emit changectx instances.
1114 '''Find revisions matching a revset and emit changectx instances.
1083
1115
1084 This is a convenience wrapper around ``revs()`` that iterates the
1116 This is a convenience wrapper around ``revs()`` that iterates the
1085 result and is a generator of changectx instances.
1117 result and is a generator of changectx instances.
1086
1118
1087 Revset aliases from the configuration are not expanded. To expand
1119 Revset aliases from the configuration are not expanded. To expand
1088 user aliases, consider calling ``scmutil.revrange()``.
1120 user aliases, consider calling ``scmutil.revrange()``.
1089 '''
1121 '''
1090 for r in self.revs(expr, *args):
1122 for r in self.revs(expr, *args):
1091 yield self[r]
1123 yield self[r]
1092
1124
1093 def anyrevs(self, specs, user=False, localalias=None):
1125 def anyrevs(self, specs, user=False, localalias=None):
1094 '''Find revisions matching one of the given revsets.
1126 '''Find revisions matching one of the given revsets.
1095
1127
1096 Revset aliases from the configuration are not expanded by default. To
1128 Revset aliases from the configuration are not expanded by default. To
1097 expand user aliases, specify ``user=True``. To provide some local
1129 expand user aliases, specify ``user=True``. To provide some local
1098 definitions overriding user aliases, set ``localalias`` to
1130 definitions overriding user aliases, set ``localalias`` to
1099 ``{name: definitionstring}``.
1131 ``{name: definitionstring}``.
1100 '''
1132 '''
1101 if user:
1133 if user:
1102 m = revset.matchany(self.ui, specs,
1134 m = revset.matchany(self.ui, specs,
1103 lookup=revset.lookupfn(self),
1135 lookup=revset.lookupfn(self),
1104 localalias=localalias)
1136 localalias=localalias)
1105 else:
1137 else:
1106 m = revset.matchany(None, specs, localalias=localalias)
1138 m = revset.matchany(None, specs, localalias=localalias)
1107 return m(self)
1139 return m(self)
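# A sketch of ``anyrevs()`` with a local alias override (hypothetical alias
# name and definition, in the {name: definitionstring} form described above):
#   revs = repo.anyrevs(['mine'], user=True,
#                       localalias={'mine': 'draft() and user("alice")'})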
1108
1140
1109 def url(self):
1141 def url(self):
1110 return 'file:' + self.root
1142 return 'file:' + self.root
1111
1143
1112 def hook(self, name, throw=False, **args):
1144 def hook(self, name, throw=False, **args):
1113 """Call a hook, passing this repo instance.
1145 """Call a hook, passing this repo instance.
1114
1146
1115 This is a convenience method to aid invoking hooks. Extensions likely
1147 This is a convenience method to aid invoking hooks. Extensions likely
1116 won't call this unless they have registered a custom hook or are
1148 won't call this unless they have registered a custom hook or are
1117 replacing code that is expected to call a hook.
1149 replacing code that is expected to call a hook.
1118 """
1150 """
1119 return hook.hook(self.ui, self, name, throw, **args)
1151 return hook.hook(self.ui, self, name, throw, **args)
1120
1152
1121 @filteredpropertycache
1153 @filteredpropertycache
1122 def _tagscache(self):
1154 def _tagscache(self):
1123 '''Returns a tagscache object that contains various tags related
1155 '''Returns a tagscache object that contains various tags related
1124 caches.'''
1156 caches.'''
1125
1157
1126 # This simplifies its cache management by having one decorated
1158 # This simplifies its cache management by having one decorated
1127 # function (this one) and the rest simply fetch things from it.
1159 # function (this one) and the rest simply fetch things from it.
1128 class tagscache(object):
1160 class tagscache(object):
1129 def __init__(self):
1161 def __init__(self):
1130 # These two define the set of tags for this repository. tags
1162 # These two define the set of tags for this repository. tags
1131 # maps tag name to node; tagtypes maps tag name to 'global' or
1163 # maps tag name to node; tagtypes maps tag name to 'global' or
1132 # 'local'. (Global tags are defined by .hgtags across all
1164 # 'local'. (Global tags are defined by .hgtags across all
1133 # heads, and local tags are defined in .hg/localtags.)
1165 # heads, and local tags are defined in .hg/localtags.)
1134 # They constitute the in-memory cache of tags.
1166 # They constitute the in-memory cache of tags.
1135 self.tags = self.tagtypes = None
1167 self.tags = self.tagtypes = None
1136
1168
1137 self.nodetagscache = self.tagslist = None
1169 self.nodetagscache = self.tagslist = None
1138
1170
1139 cache = tagscache()
1171 cache = tagscache()
1140 cache.tags, cache.tagtypes = self._findtags()
1172 cache.tags, cache.tagtypes = self._findtags()
1141
1173
1142 return cache
1174 return cache
1143
1175
1144 def tags(self):
1176 def tags(self):
1145 '''return a mapping of tag to node'''
1177 '''return a mapping of tag to node'''
1146 t = {}
1178 t = {}
1147 if self.changelog.filteredrevs:
1179 if self.changelog.filteredrevs:
1148 tags, tt = self._findtags()
1180 tags, tt = self._findtags()
1149 else:
1181 else:
1150 tags = self._tagscache.tags
1182 tags = self._tagscache.tags
1151 for k, v in tags.iteritems():
1183 for k, v in tags.iteritems():
1152 try:
1184 try:
1153 # ignore tags to unknown nodes
1185 # ignore tags to unknown nodes
1154 self.changelog.rev(v)
1186 self.changelog.rev(v)
1155 t[k] = v
1187 t[k] = v
1156 except (error.LookupError, ValueError):
1188 except (error.LookupError, ValueError):
1157 pass
1189 pass
1158 return t
1190 return t
1159
1191
1160 def _findtags(self):
1192 def _findtags(self):
1161 '''Do the hard work of finding tags. Return a pair of dicts
1193 '''Do the hard work of finding tags. Return a pair of dicts
1162 (tags, tagtypes) where tags maps tag name to node, and tagtypes
1194 (tags, tagtypes) where tags maps tag name to node, and tagtypes
1163 maps tag name to a string like \'global\' or \'local\'.
1195 maps tag name to a string like \'global\' or \'local\'.
1164 Subclasses or extensions are free to add their own tags, but
1196 Subclasses or extensions are free to add their own tags, but
1165 should be aware that the returned dicts will be retained for the
1197 should be aware that the returned dicts will be retained for the
1166 duration of the localrepo object.'''
1198 duration of the localrepo object.'''
1167
1199
1168 # XXX what tagtype should subclasses/extensions use? Currently
1200 # XXX what tagtype should subclasses/extensions use? Currently
1169 # mq and bookmarks add tags, but do not set the tagtype at all.
1201 # mq and bookmarks add tags, but do not set the tagtype at all.
1170 # Should each extension invent its own tag type? Should there
1202 # Should each extension invent its own tag type? Should there
1171 # be one tagtype for all such "virtual" tags? Or is the status
1203 # be one tagtype for all such "virtual" tags? Or is the status
1172 # quo fine?
1204 # quo fine?
1173
1205
1174
1206
1175 # map tag name to (node, hist)
1207 # map tag name to (node, hist)
1176 alltags = tagsmod.findglobaltags(self.ui, self)
1208 alltags = tagsmod.findglobaltags(self.ui, self)
1177 # map tag name to tag type
1209 # map tag name to tag type
1178 tagtypes = dict((tag, 'global') for tag in alltags)
1210 tagtypes = dict((tag, 'global') for tag in alltags)
1179
1211
1180 tagsmod.readlocaltags(self.ui, self, alltags, tagtypes)
1212 tagsmod.readlocaltags(self.ui, self, alltags, tagtypes)
1181
1213
1182 # Build the return dicts. Have to re-encode tag names because
1214 # Build the return dicts. Have to re-encode tag names because
1183 # the tags module always uses UTF-8 (in order not to lose info
1215 # the tags module always uses UTF-8 (in order not to lose info
1184 # writing to the cache), but the rest of Mercurial wants them in
1216 # writing to the cache), but the rest of Mercurial wants them in
1185 # local encoding.
1217 # local encoding.
1186 tags = {}
1218 tags = {}
1187 for (name, (node, hist)) in alltags.iteritems():
1219 for (name, (node, hist)) in alltags.iteritems():
1188 if node != nullid:
1220 if node != nullid:
1189 tags[encoding.tolocal(name)] = node
1221 tags[encoding.tolocal(name)] = node
1190 tags['tip'] = self.changelog.tip()
1222 tags['tip'] = self.changelog.tip()
1191 tagtypes = dict([(encoding.tolocal(name), value)
1223 tagtypes = dict([(encoding.tolocal(name), value)
1192 for (name, value) in tagtypes.iteritems()])
1224 for (name, value) in tagtypes.iteritems()])
1193 return (tags, tagtypes)
1225 return (tags, tagtypes)
1194
1226
1195 def tagtype(self, tagname):
1227 def tagtype(self, tagname):
1196 '''
1228 '''
1197 return the type of the given tag. result can be:
1229 return the type of the given tag. result can be:
1198
1230
1199 'local' : a local tag
1231 'local' : a local tag
1200 'global' : a global tag
1232 'global' : a global tag
1201 None : tag does not exist
1233 None : tag does not exist
1202 '''
1234 '''
1203
1235
1204 return self._tagscache.tagtypes.get(tagname)
1236 return self._tagscache.tagtypes.get(tagname)
1205
1237
1206 def tagslist(self):
1238 def tagslist(self):
1207 '''return a list of tags ordered by revision'''
1239 '''return a list of tags ordered by revision'''
1208 if not self._tagscache.tagslist:
1240 if not self._tagscache.tagslist:
1209 l = []
1241 l = []
1210 for t, n in self.tags().iteritems():
1242 for t, n in self.tags().iteritems():
1211 l.append((self.changelog.rev(n), t, n))
1243 l.append((self.changelog.rev(n), t, n))
1212 self._tagscache.tagslist = [(t, n) for r, t, n in sorted(l)]
1244 self._tagscache.tagslist = [(t, n) for r, t, n in sorted(l)]
1213
1245
1214 return self._tagscache.tagslist
1246 return self._tagscache.tagslist
1215
1247
1216 def nodetags(self, node):
1248 def nodetags(self, node):
1217 '''return the tags associated with a node'''
1249 '''return the tags associated with a node'''
1218 if not self._tagscache.nodetagscache:
1250 if not self._tagscache.nodetagscache:
1219 nodetagscache = {}
1251 nodetagscache = {}
1220 for t, n in self._tagscache.tags.iteritems():
1252 for t, n in self._tagscache.tags.iteritems():
1221 nodetagscache.setdefault(n, []).append(t)
1253 nodetagscache.setdefault(n, []).append(t)
1222 for tags in nodetagscache.itervalues():
1254 for tags in nodetagscache.itervalues():
1223 tags.sort()
1255 tags.sort()
1224 self._tagscache.nodetagscache = nodetagscache
1256 self._tagscache.nodetagscache = nodetagscache
1225 return self._tagscache.nodetagscache.get(node, [])
1257 return self._tagscache.nodetagscache.get(node, [])
1226
1258
1227 def nodebookmarks(self, node):
1259 def nodebookmarks(self, node):
1228 """return the list of bookmarks pointing to the specified node"""
1260 """return the list of bookmarks pointing to the specified node"""
1229 return self._bookmarks.names(node)
1261 return self._bookmarks.names(node)
1230
1262
1231 def branchmap(self):
1263 def branchmap(self):
1232 '''returns a dictionary {branch: [branchheads]} with branchheads
1264 '''returns a dictionary {branch: [branchheads]} with branchheads
1233 ordered by increasing revision number'''
1265 ordered by increasing revision number'''
1234 branchmap.updatecache(self)
1266 branchmap.updatecache(self)
1235 return self._branchcaches[self.filtername]
1267 return self._branchcaches[self.filtername]
1236
1268
1237 @unfilteredmethod
1269 @unfilteredmethod
1238 def revbranchcache(self):
1270 def revbranchcache(self):
1239 if not self._revbranchcache:
1271 if not self._revbranchcache:
1240 self._revbranchcache = branchmap.revbranchcache(self.unfiltered())
1272 self._revbranchcache = branchmap.revbranchcache(self.unfiltered())
1241 return self._revbranchcache
1273 return self._revbranchcache
1242
1274
1243 def branchtip(self, branch, ignoremissing=False):
1275 def branchtip(self, branch, ignoremissing=False):
1244 '''return the tip node for a given branch
1276 '''return the tip node for a given branch
1245
1277
1246 If ignoremissing is True, then this method will not raise an error.
1278 If ignoremissing is True, then this method will not raise an error.
1247 This is helpful for callers that only expect None for a missing branch
1279 This is helpful for callers that only expect None for a missing branch
1248 (e.g. namespace).
1280 (e.g. namespace).
1249
1281
1250 '''
1282 '''
1251 try:
1283 try:
1252 return self.branchmap().branchtip(branch)
1284 return self.branchmap().branchtip(branch)
1253 except KeyError:
1285 except KeyError:
1254 if not ignoremissing:
1286 if not ignoremissing:
1255 raise error.RepoLookupError(_("unknown branch '%s'") % branch)
1287 raise error.RepoLookupError(_("unknown branch '%s'") % branch)
1256 else:
1288 else:
1257 pass
1289 pass
1258
1290
1259 def lookup(self, key):
1291 def lookup(self, key):
1260 return scmutil.revsymbol(self, key).node()
1292 return scmutil.revsymbol(self, key).node()
1261
1293
1262 def lookupbranch(self, key):
1294 def lookupbranch(self, key):
1263 if key in self.branchmap():
1295 if key in self.branchmap():
1264 return key
1296 return key
1265
1297
1266 return scmutil.revsymbol(self, key).branch()
1298 return scmutil.revsymbol(self, key).branch()
1267
1299
1268 def known(self, nodes):
1300 def known(self, nodes):
1269 cl = self.changelog
1301 cl = self.changelog
1270 nm = cl.nodemap
1302 nm = cl.nodemap
1271 filtered = cl.filteredrevs
1303 filtered = cl.filteredrevs
1272 result = []
1304 result = []
1273 for n in nodes:
1305 for n in nodes:
1274 r = nm.get(n)
1306 r = nm.get(n)
1275 resp = not (r is None or r in filtered)
1307 resp = not (r is None or r in filtered)
1276 result.append(resp)
1308 result.append(resp)
1277 return result
1309 return result
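# Sketch: ``known()`` returns one boolean per node, treating filtered
# revisions as unknown (``node1``/``node2`` are hypothetical binary nodes):
#   flags = repo.known([node1, node2])   # e.g. [True, False]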
1278
1310
1279 def local(self):
1311 def local(self):
1280 return self
1312 return self
1281
1313
1282 def publishing(self):
1314 def publishing(self):
1283 # it's safe (and desirable) to trust the publish flag unconditionally
1315 # it's safe (and desirable) to trust the publish flag unconditionally
1284 # so that we don't finalize changes shared between users via ssh or nfs
1316 # so that we don't finalize changes shared between users via ssh or nfs
1285 return self.ui.configbool('phases', 'publish', untrusted=True)
1317 return self.ui.configbool('phases', 'publish', untrusted=True)
1286
1318
1287 def cancopy(self):
1319 def cancopy(self):
1288 # so statichttprepo's override of local() works
1320 # so statichttprepo's override of local() works
1289 if not self.local():
1321 if not self.local():
1290 return False
1322 return False
1291 if not self.publishing():
1323 if not self.publishing():
1292 return True
1324 return True
1293 # if publishing we can't copy if there is filtered content
1325 # if publishing we can't copy if there is filtered content
1294 return not self.filtered('visible').changelog.filteredrevs
1326 return not self.filtered('visible').changelog.filteredrevs
1295
1327
1296 def shared(self):
1328 def shared(self):
1297 '''the type of shared repository (None if not shared)'''
1329 '''the type of shared repository (None if not shared)'''
1298 if self.sharedpath != self.path:
1330 if self.sharedpath != self.path:
1299 return 'store'
1331 return 'store'
1300 return None
1332 return None
1301
1333
1302 def wjoin(self, f, *insidef):
1334 def wjoin(self, f, *insidef):
1303 return self.vfs.reljoin(self.root, f, *insidef)
1335 return self.vfs.reljoin(self.root, f, *insidef)
1304
1336
1305 def file(self, f):
1337 def file(self, f):
1306 if f[0] == '/':
1338 if f[0] == '/':
1307 f = f[1:]
1339 f = f[1:]
1308 return filelog.filelog(self.svfs, f)
1340 return filelog.filelog(self.svfs, f)
1309
1341
1310 def setparents(self, p1, p2=nullid):
1342 def setparents(self, p1, p2=nullid):
1311 with self.dirstate.parentchange():
1343 with self.dirstate.parentchange():
1312 copies = self.dirstate.setparents(p1, p2)
1344 copies = self.dirstate.setparents(p1, p2)
1313 pctx = self[p1]
1345 pctx = self[p1]
1314 if copies:
1346 if copies:
1315 # Adjust copy records; the dirstate cannot do it, as it
1347 # Adjust copy records; the dirstate cannot do it, as it
1316 # requires access to the parents' manifests. Preserve them
1348 # requires access to the parents' manifests. Preserve them
1317 # only for entries added to the first parent.
1349 # only for entries added to the first parent.
1318 for f in copies:
1350 for f in copies:
1319 if f not in pctx and copies[f] in pctx:
1351 if f not in pctx and copies[f] in pctx:
1320 self.dirstate.copy(copies[f], f)
1352 self.dirstate.copy(copies[f], f)
1321 if p2 == nullid:
1353 if p2 == nullid:
1322 for f, s in sorted(self.dirstate.copies().items()):
1354 for f, s in sorted(self.dirstate.copies().items()):
1323 if f not in pctx and s not in pctx:
1355 if f not in pctx and s not in pctx:
1324 self.dirstate.copy(None, f)
1356 self.dirstate.copy(None, f)
1325
1357
1326 def filectx(self, path, changeid=None, fileid=None, changectx=None):
1358 def filectx(self, path, changeid=None, fileid=None, changectx=None):
1327 """changeid can be a changeset revision, node, or tag.
1359 """changeid can be a changeset revision, node, or tag.
1328 fileid can be a file revision or node."""
1360 fileid can be a file revision or node."""
1329 return context.filectx(self, path, changeid, fileid,
1361 return context.filectx(self, path, changeid, fileid,
1330 changectx=changectx)
1362 changectx=changectx)
1331
1363
1332 def getcwd(self):
1364 def getcwd(self):
1333 return self.dirstate.getcwd()
1365 return self.dirstate.getcwd()
1334
1366
1335 def pathto(self, f, cwd=None):
1367 def pathto(self, f, cwd=None):
1336 return self.dirstate.pathto(f, cwd)
1368 return self.dirstate.pathto(f, cwd)
1337
1369
1338 def _loadfilter(self, filter):
1370 def _loadfilter(self, filter):
1339 if filter not in self._filterpats:
1371 if filter not in self._filterpats:
1340 l = []
1372 l = []
1341 for pat, cmd in self.ui.configitems(filter):
1373 for pat, cmd in self.ui.configitems(filter):
1342 if cmd == '!':
1374 if cmd == '!':
1343 continue
1375 continue
1344 mf = matchmod.match(self.root, '', [pat])
1376 mf = matchmod.match(self.root, '', [pat])
1345 fn = None
1377 fn = None
1346 params = cmd
1378 params = cmd
1347 for name, filterfn in self._datafilters.iteritems():
1379 for name, filterfn in self._datafilters.iteritems():
1348 if cmd.startswith(name):
1380 if cmd.startswith(name):
1349 fn = filterfn
1381 fn = filterfn
1350 params = cmd[len(name):].lstrip()
1382 params = cmd[len(name):].lstrip()
1351 break
1383 break
1352 if not fn:
1384 if not fn:
1353 fn = lambda s, c, **kwargs: procutil.filter(s, c)
1385 fn = lambda s, c, **kwargs: procutil.filter(s, c)
1354 # Wrap old filters not supporting keyword arguments
1386 # Wrap old filters not supporting keyword arguments
1355 if not pycompat.getargspec(fn)[2]:
1387 if not pycompat.getargspec(fn)[2]:
1356 oldfn = fn
1388 oldfn = fn
1357 fn = lambda s, c, **kwargs: oldfn(s, c)
1389 fn = lambda s, c, **kwargs: oldfn(s, c)
1358 l.append((mf, fn, params))
1390 l.append((mf, fn, params))
1359 self._filterpats[filter] = l
1391 self._filterpats[filter] = l
1360 return self._filterpats[filter]
1392 return self._filterpats[filter]
1361
1393
1362 def _filter(self, filterpats, filename, data):
1394 def _filter(self, filterpats, filename, data):
1363 for mf, fn, cmd in filterpats:
1395 for mf, fn, cmd in filterpats:
1364 if mf(filename):
1396 if mf(filename):
1365 self.ui.debug("filtering %s through %s\n" % (filename, cmd))
1397 self.ui.debug("filtering %s through %s\n" % (filename, cmd))
1366 data = fn(data, cmd, ui=self.ui, repo=self, filename=filename)
1398 data = fn(data, cmd, ui=self.ui, repo=self, filename=filename)
1367 break
1399 break
1368
1400
1369 return data
1401 return data
1370
1402
1371 @unfilteredpropertycache
1403 @unfilteredpropertycache
1372 def _encodefilterpats(self):
1404 def _encodefilterpats(self):
1373 return self._loadfilter('encode')
1405 return self._loadfilter('encode')
1374
1406
1375 @unfilteredpropertycache
1407 @unfilteredpropertycache
1376 def _decodefilterpats(self):
1408 def _decodefilterpats(self):
1377 return self._loadfilter('decode')
1409 return self._loadfilter('decode')
1378
1410
1379 def adddatafilter(self, name, filter):
1411 def adddatafilter(self, name, filter):
1380 self._datafilters[name] = filter
1412 self._datafilters[name] = filter
1381
1413
1382 def wread(self, filename):
1414 def wread(self, filename):
1383 if self.wvfs.islink(filename):
1415 if self.wvfs.islink(filename):
1384 data = self.wvfs.readlink(filename)
1416 data = self.wvfs.readlink(filename)
1385 else:
1417 else:
1386 data = self.wvfs.read(filename)
1418 data = self.wvfs.read(filename)
1387 return self._filter(self._encodefilterpats, filename, data)
1419 return self._filter(self._encodefilterpats, filename, data)
1388
1420
1389 def wwrite(self, filename, data, flags, backgroundclose=False, **kwargs):
1421 def wwrite(self, filename, data, flags, backgroundclose=False, **kwargs):
1390 """write ``data`` into ``filename`` in the working directory
1422 """write ``data`` into ``filename`` in the working directory
1391
1423
1392 This returns the length of the written (possibly decoded) data.
1424 This returns the length of the written (possibly decoded) data.
1393 """
1425 """
1394 data = self._filter(self._decodefilterpats, filename, data)
1426 data = self._filter(self._decodefilterpats, filename, data)
1395 if 'l' in flags:
1427 if 'l' in flags:
1396 self.wvfs.symlink(data, filename)
1428 self.wvfs.symlink(data, filename)
1397 else:
1429 else:
1398 self.wvfs.write(filename, data, backgroundclose=backgroundclose,
1430 self.wvfs.write(filename, data, backgroundclose=backgroundclose,
1399 **kwargs)
1431 **kwargs)
1400 if 'x' in flags:
1432 if 'x' in flags:
1401 self.wvfs.setflags(filename, False, True)
1433 self.wvfs.setflags(filename, False, True)
1402 else:
1434 else:
1403 self.wvfs.setflags(filename, False, False)
1435 self.wvfs.setflags(filename, False, False)
1404 return len(data)
1436 return len(data)
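# A sketch of the flag handling above (hypothetical paths and contents):
#   repo.wwrite('plain.txt', 'data\n', '')     # regular file
#   repo.wwrite('run.sh', '#!/bin/sh\n', 'x')  # executable bit set
#   repo.wwrite('link', 'target', 'l')         # written as a symlink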
1405
1437
1406 def wwritedata(self, filename, data):
1438 def wwritedata(self, filename, data):
1407 return self._filter(self._decodefilterpats, filename, data)
1439 return self._filter(self._decodefilterpats, filename, data)
1408
1440
1409 def currenttransaction(self):
1441 def currenttransaction(self):
1410 """return the current transaction or None if none exists"""
1442 """return the current transaction or None if none exists"""
1411 if self._transref:
1443 if self._transref:
1412 tr = self._transref()
1444 tr = self._transref()
1413 else:
1445 else:
1414 tr = None
1446 tr = None
1415
1447
1416 if tr and tr.running():
1448 if tr and tr.running():
1417 return tr
1449 return tr
1418 return None
1450 return None
1419
1451
1420 def transaction(self, desc, report=None):
1452 def transaction(self, desc, report=None):
1421 if (self.ui.configbool('devel', 'all-warnings')
1453 if (self.ui.configbool('devel', 'all-warnings')
1422 or self.ui.configbool('devel', 'check-locks')):
1454 or self.ui.configbool('devel', 'check-locks')):
1423 if self._currentlock(self._lockref) is None:
1455 if self._currentlock(self._lockref) is None:
1424 raise error.ProgrammingError('transaction requires locking')
1456 raise error.ProgrammingError('transaction requires locking')
1425 tr = self.currenttransaction()
1457 tr = self.currenttransaction()
1426 if tr is not None:
1458 if tr is not None:
1427 return tr.nest(name=desc)
1459 return tr.nest(name=desc)
1428
1460
1429 # abort here if the journal already exists
1461 # abort here if the journal already exists
1430 if self.svfs.exists("journal"):
1462 if self.svfs.exists("journal"):
1431 raise error.RepoError(
1463 raise error.RepoError(
1432 _("abandoned transaction found"),
1464 _("abandoned transaction found"),
1433 hint=_("run 'hg recover' to clean up transaction"))
1465 hint=_("run 'hg recover' to clean up transaction"))
1434
1466
1435 idbase = "%.40f#%f" % (random.random(), time.time())
1467 idbase = "%.40f#%f" % (random.random(), time.time())
1436 ha = hex(hashlib.sha1(idbase).digest())
1468 ha = hex(hashlib.sha1(idbase).digest())
1437 txnid = 'TXN:' + ha
1469 txnid = 'TXN:' + ha
1438 self.hook('pretxnopen', throw=True, txnname=desc, txnid=txnid)
1470 self.hook('pretxnopen', throw=True, txnname=desc, txnid=txnid)
1439
1471
1440 self._writejournal(desc)
1472 self._writejournal(desc)
1441 renames = [(vfs, x, undoname(x)) for vfs, x in self._journalfiles()]
1473 renames = [(vfs, x, undoname(x)) for vfs, x in self._journalfiles()]
1442 if report:
1474 if report:
1443 rp = report
1475 rp = report
1444 else:
1476 else:
1445 rp = self.ui.warn
1477 rp = self.ui.warn
1446 vfsmap = {'plain': self.vfs} # root of .hg/
1478 vfsmap = {'plain': self.vfs} # root of .hg/
1447 # we must avoid cyclic reference between repo and transaction.
1479 # we must avoid cyclic reference between repo and transaction.
1448 reporef = weakref.ref(self)
1480 reporef = weakref.ref(self)
1449 # Code to track tag movement
1481 # Code to track tag movement
1450 #
1482 #
1451 # Since tags are all handled as file content, it is actually quite hard
1483 # Since tags are all handled as file content, it is actually quite hard
1452 # to track these movements from a code perspective. So we fall back to
1484 # to track these movements from a code perspective. So we fall back to
1453 # tracking at the repository level. One could envision tracking changes
1485 # tracking at the repository level. One could envision tracking changes
1454 # to the '.hgtags' file through changegroup apply but that fails to
1486 # to the '.hgtags' file through changegroup apply but that fails to
1455 # cope with cases where a transaction exposes new heads without a changegroup
1487 # cope with cases where a transaction exposes new heads without a changegroup
1456 # being involved (eg: phase movement).
1488 # being involved (eg: phase movement).
1457 #
1489 #
1458 # For now, we gate the feature behind a flag since this likely comes
1490 # For now, we gate the feature behind a flag since this likely comes
1459 # with performance impacts. The current code runs more often than needed
1491 # with performance impacts. The current code runs more often than needed
1460 # and does not use caches as much as it could. The current focus is on
1492 # and does not use caches as much as it could. The current focus is on
1461 # the behavior of the feature so we disable it by default. The flag
1493 # the behavior of the feature so we disable it by default. The flag
1462 # will be removed when we are happy with the performance impact.
1494 # will be removed when we are happy with the performance impact.
1463 #
1495 #
1464 # Once this feature is no longer experimental move the following
1496 # Once this feature is no longer experimental move the following
1465 # documentation to the appropriate help section:
1497 # documentation to the appropriate help section:
1466 #
1498 #
1467 # The ``HG_TAG_MOVED`` variable will be set if the transaction touched
1499 # The ``HG_TAG_MOVED`` variable will be set if the transaction touched
1468 # tags (new or changed or deleted tags). In addition the details of
1500 # tags (new or changed or deleted tags). In addition the details of
1469 # these changes are made available in a file at:
1501 # these changes are made available in a file at:
1470 # ``REPOROOT/.hg/changes/tags.changes``.
1502 # ``REPOROOT/.hg/changes/tags.changes``.
1471 # Make sure you check for HG_TAG_MOVED before reading that file as it
1503 # Make sure you check for HG_TAG_MOVED before reading that file as it
1472 # might exist from a previous transaction even if no tags were touched
1504 # might exist from a previous transaction even if no tags were touched
1473 # in this one. Changes are recorded in a line-based format::
1505 # in this one. Changes are recorded in a line-based format::
1474 #
1506 #
1475 # <action> <hex-node> <tag-name>\n
1507 # <action> <hex-node> <tag-name>\n
1476 #
1508 #
1477 # Actions are defined as follows:
1509 # Actions are defined as follows:
1478 # "-R": tag is removed,
1510 # "-R": tag is removed,
1479 # "+A": tag is added,
1511 # "+A": tag is added,
1480 # "-M": tag is moved (old value),
1512 # "-M": tag is moved (old value),
1481 # "+M": tag is moved (new value),
1513 # "+M": tag is moved (new value),
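# For example, a moved tag would typically appear in tags.changes as two
# lines, one for the old value and one for the new (placeholder nodes):
#   -M <old-hex-node> mytag
#   +M <new-hex-node> mytag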
1482 tracktags = lambda x: None
1514 tracktags = lambda x: None
1483 # experimental config: experimental.hook-track-tags
1515 # experimental config: experimental.hook-track-tags
1484 shouldtracktags = self.ui.configbool('experimental', 'hook-track-tags')
1516 shouldtracktags = self.ui.configbool('experimental', 'hook-track-tags')
1485 if desc != 'strip' and shouldtracktags:
1517 if desc != 'strip' and shouldtracktags:
1486 oldheads = self.changelog.headrevs()
1518 oldheads = self.changelog.headrevs()
1487 def tracktags(tr2):
1519 def tracktags(tr2):
1488 repo = reporef()
1520 repo = reporef()
1489 oldfnodes = tagsmod.fnoderevs(repo.ui, repo, oldheads)
1521 oldfnodes = tagsmod.fnoderevs(repo.ui, repo, oldheads)
1490 newheads = repo.changelog.headrevs()
1522 newheads = repo.changelog.headrevs()
1491 newfnodes = tagsmod.fnoderevs(repo.ui, repo, newheads)
1523 newfnodes = tagsmod.fnoderevs(repo.ui, repo, newheads)
1492 # note: we compare lists here.
1524 # note: we compare lists here.
1493 # As we do it only once, building a set would not be cheaper
1525 # As we do it only once, building a set would not be cheaper
1494 changes = tagsmod.difftags(repo.ui, repo, oldfnodes, newfnodes)
1526 changes = tagsmod.difftags(repo.ui, repo, oldfnodes, newfnodes)
1495 if changes:
1527 if changes:
1496 tr2.hookargs['tag_moved'] = '1'
1528 tr2.hookargs['tag_moved'] = '1'
1497 with repo.vfs('changes/tags.changes', 'w',
1529 with repo.vfs('changes/tags.changes', 'w',
1498 atomictemp=True) as changesfile:
1530 atomictemp=True) as changesfile:
1499 # note: we do not register the file to the transaction
1531 # note: we do not register the file to the transaction
1500 # because we need it to still exist when the transaction
1532 # because we need it to still exist when the transaction
1501 # is closed (for txnclose hooks)
1533 # is closed (for txnclose hooks)
1502 tagsmod.writediff(changesfile, changes)
1534 tagsmod.writediff(changesfile, changes)
1503 def validate(tr2):
1535 def validate(tr2):
1504 """will run pre-closing hooks"""
1536 """will run pre-closing hooks"""
1505 # XXX the transaction API is a bit lacking here so we take a hacky
1537 # XXX the transaction API is a bit lacking here so we take a hacky
1506 # path for now
1538 # path for now
1507 #
1539 #
1508 # We cannot add this as a "pending" hook since the 'tr.hookargs'
1540 # We cannot add this as a "pending" hook since the 'tr.hookargs'
1509 # dict is copied before these run. In addition we need the data
1541 # dict is copied before these run. In addition we need the data
1510 # available to in-memory hooks too.
1542 # available to in-memory hooks too.
1511 #
1543 #
1512 # Moreover, we also need to make sure this runs before txnclose
1544 # Moreover, we also need to make sure this runs before txnclose
1513 # hooks and there is no "pending" mechanism that would execute
1545 # hooks and there is no "pending" mechanism that would execute
1514 # logic only if hooks are about to run.
1546 # logic only if hooks are about to run.
1515 #
1547 #
1516 # Fixing this limitation of the transaction is also needed to track
1548 # Fixing this limitation of the transaction is also needed to track
1517 # other families of changes (bookmarks, phases, obsolescence).
1549 # other families of changes (bookmarks, phases, obsolescence).
1518 #
1550 #
1519 # This will have to be fixed before we remove the experimental
1551 # This will have to be fixed before we remove the experimental
1520 # gating.
1552 # gating.
1521 tracktags(tr2)
1553 tracktags(tr2)
1522 repo = reporef()
1554 repo = reporef()
1523 if repo.ui.configbool('experimental', 'single-head-per-branch'):
1555 if repo.ui.configbool('experimental', 'single-head-per-branch'):
1524 scmutil.enforcesinglehead(repo, tr2, desc)
1556 scmutil.enforcesinglehead(repo, tr2, desc)
1525 if hook.hashook(repo.ui, 'pretxnclose-bookmark'):
1557 if hook.hashook(repo.ui, 'pretxnclose-bookmark'):
1526 for name, (old, new) in sorted(tr.changes['bookmarks'].items()):
1558 for name, (old, new) in sorted(tr.changes['bookmarks'].items()):
1527 args = tr.hookargs.copy()
1559 args = tr.hookargs.copy()
1528 args.update(bookmarks.preparehookargs(name, old, new))
1560 args.update(bookmarks.preparehookargs(name, old, new))
1529 repo.hook('pretxnclose-bookmark', throw=True,
1561 repo.hook('pretxnclose-bookmark', throw=True,
1530 txnname=desc,
1562 txnname=desc,
1531 **pycompat.strkwargs(args))
1563 **pycompat.strkwargs(args))
1532 if hook.hashook(repo.ui, 'pretxnclose-phase'):
1564 if hook.hashook(repo.ui, 'pretxnclose-phase'):
1533 cl = repo.unfiltered().changelog
1565 cl = repo.unfiltered().changelog
1534 for rev, (old, new) in tr.changes['phases'].items():
1566 for rev, (old, new) in tr.changes['phases'].items():
1535 args = tr.hookargs.copy()
1567 args = tr.hookargs.copy()
1536 node = hex(cl.node(rev))
1568 node = hex(cl.node(rev))
1537 args.update(phases.preparehookargs(node, old, new))
1569 args.update(phases.preparehookargs(node, old, new))
1538 repo.hook('pretxnclose-phase', throw=True, txnname=desc,
1570 repo.hook('pretxnclose-phase', throw=True, txnname=desc,
1539 **pycompat.strkwargs(args))
1571 **pycompat.strkwargs(args))
1540
1572
1541 repo.hook('pretxnclose', throw=True,
1573 repo.hook('pretxnclose', throw=True,
1542 txnname=desc, **pycompat.strkwargs(tr.hookargs))
1574 txnname=desc, **pycompat.strkwargs(tr.hookargs))
1543 def releasefn(tr, success):
1575 def releasefn(tr, success):
1544 repo = reporef()
1576 repo = reporef()
1545 if success:
1577 if success:
1546 # this should be explicitly invoked here, because
1578 # this should be explicitly invoked here, because
1547 # in-memory changes aren't written out at closing
1579 # in-memory changes aren't written out at closing
1548 # transaction, if tr.addfilegenerator (via
1580 # transaction, if tr.addfilegenerator (via
1549 # dirstate.write or so) isn't invoked while
1581 # dirstate.write or so) isn't invoked while
1550 # transaction running
1582 # transaction running
1551 repo.dirstate.write(None)
1583 repo.dirstate.write(None)
1552 else:
1584 else:
1553 # discard all changes (including ones already written
1585 # discard all changes (including ones already written
1554 # out) in this transaction
1586 # out) in this transaction
1555 narrowspec.restorebackup(self, 'journal.narrowspec')
1587 narrowspec.restorebackup(self, 'journal.narrowspec')
1556 repo.dirstate.restorebackup(None, 'journal.dirstate')
1588 repo.dirstate.restorebackup(None, 'journal.dirstate')
1557
1589
1558 repo.invalidate(clearfilecache=True)
1590 repo.invalidate(clearfilecache=True)
1559
1591
1560 tr = transaction.transaction(rp, self.svfs, vfsmap,
1592 tr = transaction.transaction(rp, self.svfs, vfsmap,
1561 "journal",
1593 "journal",
1562 "undo",
1594 "undo",
1563 aftertrans(renames),
1595 aftertrans(renames),
1564 self.store.createmode,
1596 self.store.createmode,
1565 validator=validate,
1597 validator=validate,
1566 releasefn=releasefn,
1598 releasefn=releasefn,
1567 checkambigfiles=_cachedfiles,
1599 checkambigfiles=_cachedfiles,
1568 name=desc)
1600 name=desc)
1569 tr.changes['origrepolen'] = len(self)
1601 tr.changes['origrepolen'] = len(self)
1570 tr.changes['obsmarkers'] = set()
1602 tr.changes['obsmarkers'] = set()
1571 tr.changes['phases'] = {}
1603 tr.changes['phases'] = {}
1572 tr.changes['bookmarks'] = {}
1604 tr.changes['bookmarks'] = {}
1573
1605
1574 tr.hookargs['txnid'] = txnid
1606 tr.hookargs['txnid'] = txnid
1575 # note: writing the fncache only during finalize means that the file is
1607 # note: writing the fncache only during finalize means that the file is
1576 # outdated when running hooks. As fncache is used for streaming clones,
1608 # outdated when running hooks. As fncache is used for streaming clones,
1577 # this is not expected to break anything that happens during the hooks.
1609 # this is not expected to break anything that happens during the hooks.
1578 tr.addfinalize('flush-fncache', self.store.write)
1610 tr.addfinalize('flush-fncache', self.store.write)
1579 def txnclosehook(tr2):
1611 def txnclosehook(tr2):
1580 """To be run if the transaction is successful; will schedule a hook run
1612 """To be run if the transaction is successful; will schedule a hook run
1581 """
1613 """
1582 # Don't reference tr2 in hook() so we don't hold a reference.
1614 # Don't reference tr2 in hook() so we don't hold a reference.
1583 # This reduces memory consumption when there are multiple
1615 # This reduces memory consumption when there are multiple
1584 # transactions per lock. This can likely go away if issue5045
1616 # transactions per lock. This can likely go away if issue5045
1585 # fixes the function accumulation.
1617 # fixes the function accumulation.
1586 hookargs = tr2.hookargs
1618 hookargs = tr2.hookargs
1587
1619
1588 def hookfunc():
1620 def hookfunc():
1589 repo = reporef()
1621 repo = reporef()
1590 if hook.hashook(repo.ui, 'txnclose-bookmark'):
1622 if hook.hashook(repo.ui, 'txnclose-bookmark'):
1591 bmchanges = sorted(tr.changes['bookmarks'].items())
1623 bmchanges = sorted(tr.changes['bookmarks'].items())
1592 for name, (old, new) in bmchanges:
1624 for name, (old, new) in bmchanges:
1593 args = tr.hookargs.copy()
1625 args = tr.hookargs.copy()
1594 args.update(bookmarks.preparehookargs(name, old, new))
1626 args.update(bookmarks.preparehookargs(name, old, new))
1595 repo.hook('txnclose-bookmark', throw=False,
1627 repo.hook('txnclose-bookmark', throw=False,
1596 txnname=desc, **pycompat.strkwargs(args))
1628 txnname=desc, **pycompat.strkwargs(args))
1597
1629
1598 if hook.hashook(repo.ui, 'txnclose-phase'):
1630 if hook.hashook(repo.ui, 'txnclose-phase'):
1599 cl = repo.unfiltered().changelog
1631 cl = repo.unfiltered().changelog
1600 phasemv = sorted(tr.changes['phases'].items())
1632 phasemv = sorted(tr.changes['phases'].items())
1601 for rev, (old, new) in phasemv:
1633 for rev, (old, new) in phasemv:
1602 args = tr.hookargs.copy()
1634 args = tr.hookargs.copy()
1603 node = hex(cl.node(rev))
1635 node = hex(cl.node(rev))
1604 args.update(phases.preparehookargs(node, old, new))
1636 args.update(phases.preparehookargs(node, old, new))
1605 repo.hook('txnclose-phase', throw=False, txnname=desc,
1637 repo.hook('txnclose-phase', throw=False, txnname=desc,
1606 **pycompat.strkwargs(args))
1638 **pycompat.strkwargs(args))
1607
1639
1608 repo.hook('txnclose', throw=False, txnname=desc,
1640 repo.hook('txnclose', throw=False, txnname=desc,
1609 **pycompat.strkwargs(hookargs))
1641 **pycompat.strkwargs(hookargs))
1610 reporef()._afterlock(hookfunc)
1642 reporef()._afterlock(hookfunc)
1611 tr.addfinalize('txnclose-hook', txnclosehook)
1643 tr.addfinalize('txnclose-hook', txnclosehook)
1612 # Include a leading "-" to make it happen before the transaction summary
1644 # Include a leading "-" to make it happen before the transaction summary
1613 # reports registered via scmutil.registersummarycallback() whose names
1645 # reports registered via scmutil.registersummarycallback() whose names
1614 # are 00-txnreport etc. That way, the caches will be warm when the
1646 # are 00-txnreport etc. That way, the caches will be warm when the
1615 # callbacks run.
1647 # callbacks run.
1616 tr.addpostclose('-warm-cache', self._buildcacheupdater(tr))
1648 tr.addpostclose('-warm-cache', self._buildcacheupdater(tr))
1617 def txnaborthook(tr2):
1649 def txnaborthook(tr2):
1618 """To be run if transaction is aborted
1650 """To be run if transaction is aborted
1619 """
1651 """
1620 reporef().hook('txnabort', throw=False, txnname=desc,
1652 reporef().hook('txnabort', throw=False, txnname=desc,
1621 **pycompat.strkwargs(tr2.hookargs))
1653 **pycompat.strkwargs(tr2.hookargs))
1622 tr.addabort('txnabort-hook', txnaborthook)
1654 tr.addabort('txnabort-hook', txnaborthook)
1623 # avoid eager cache invalidation. in-memory data should be identical
1655 # avoid eager cache invalidation. in-memory data should be identical
1624 # to stored data if transaction has no error.
1656 # to stored data if transaction has no error.
1625 tr.addpostclose('refresh-filecachestats', self._refreshfilecachestats)
1657 tr.addpostclose('refresh-filecachestats', self._refreshfilecachestats)
1626 self._transref = weakref.ref(tr)
1658 self._transref = weakref.ref(tr)
1627 scmutil.registersummarycallback(self, tr, desc)
1659 scmutil.registersummarycallback(self, tr, desc)
1628 return tr
1660 return tr
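# A minimal calling sketch (hedged; assumes the returned transaction is used
# as a context manager and that the caller holds the store lock, as required
# by the check at the top of this method):
#   with repo.lock():
#       with repo.transaction('my-operation'):
#           pass  # modify store data here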
1629
1661
1630 def _journalfiles(self):
1662 def _journalfiles(self):
1631 return ((self.svfs, 'journal'),
1663 return ((self.svfs, 'journal'),
1632 (self.vfs, 'journal.dirstate'),
1664 (self.vfs, 'journal.dirstate'),
1633 (self.vfs, 'journal.branch'),
1665 (self.vfs, 'journal.branch'),
1634 (self.vfs, 'journal.desc'),
1666 (self.vfs, 'journal.desc'),
1635 (self.vfs, 'journal.bookmarks'),
1667 (self.vfs, 'journal.bookmarks'),
1636 (self.svfs, 'journal.phaseroots'))
1668 (self.svfs, 'journal.phaseroots'))
1637
1669
1638 def undofiles(self):
1670 def undofiles(self):
1639 return [(vfs, undoname(x)) for vfs, x in self._journalfiles()]
1671 return [(vfs, undoname(x)) for vfs, x in self._journalfiles()]
1640
1672
1641 @unfilteredmethod
1673 @unfilteredmethod
1642 def _writejournal(self, desc):
1674 def _writejournal(self, desc):
1643 self.dirstate.savebackup(None, 'journal.dirstate')
1675 self.dirstate.savebackup(None, 'journal.dirstate')
1644 narrowspec.savebackup(self, 'journal.narrowspec')
1676 narrowspec.savebackup(self, 'journal.narrowspec')
1645 self.vfs.write("journal.branch",
1677 self.vfs.write("journal.branch",
1646 encoding.fromlocal(self.dirstate.branch()))
1678 encoding.fromlocal(self.dirstate.branch()))
1647 self.vfs.write("journal.desc",
1679 self.vfs.write("journal.desc",
1648 "%d\n%s\n" % (len(self), desc))
1680 "%d\n%s\n" % (len(self), desc))
1649 self.vfs.write("journal.bookmarks",
1681 self.vfs.write("journal.bookmarks",
1650 self.vfs.tryread("bookmarks"))
1682 self.vfs.tryread("bookmarks"))
1651 self.svfs.write("journal.phaseroots",
1683 self.svfs.write("journal.phaseroots",
1652 self.svfs.tryread("phaseroots"))
1684 self.svfs.tryread("phaseroots"))
1653
1685
1654 def recover(self):
1686 def recover(self):
1655 with self.lock():
1687 with self.lock():
1656 if self.svfs.exists("journal"):
1688 if self.svfs.exists("journal"):
1657 self.ui.status(_("rolling back interrupted transaction\n"))
1689 self.ui.status(_("rolling back interrupted transaction\n"))
1658 vfsmap = {'': self.svfs,
1690 vfsmap = {'': self.svfs,
1659 'plain': self.vfs,}
1691 'plain': self.vfs,}
1660 transaction.rollback(self.svfs, vfsmap, "journal",
1692 transaction.rollback(self.svfs, vfsmap, "journal",
1661 self.ui.warn,
1693 self.ui.warn,
1662 checkambigfiles=_cachedfiles)
1694 checkambigfiles=_cachedfiles)
1663 self.invalidate()
1695 self.invalidate()
1664 return True
1696 return True
1665 else:
1697 else:
1666 self.ui.warn(_("no interrupted transaction available\n"))
1698 self.ui.warn(_("no interrupted transaction available\n"))
1667 return False
1699 return False
1668
1700
1669 def rollback(self, dryrun=False, force=False):
1701 def rollback(self, dryrun=False, force=False):
1670 wlock = lock = dsguard = None
1702 wlock = lock = dsguard = None
1671 try:
1703 try:
1672 wlock = self.wlock()
1704 wlock = self.wlock()
1673 lock = self.lock()
1705 lock = self.lock()
1674 if self.svfs.exists("undo"):
1706 if self.svfs.exists("undo"):
1675 dsguard = dirstateguard.dirstateguard(self, 'rollback')
1707 dsguard = dirstateguard.dirstateguard(self, 'rollback')
1676
1708
1677 return self._rollback(dryrun, force, dsguard)
1709 return self._rollback(dryrun, force, dsguard)
1678 else:
1710 else:
1679 self.ui.warn(_("no rollback information available\n"))
1711 self.ui.warn(_("no rollback information available\n"))
1680 return 1
1712 return 1
1681 finally:
1713 finally:
1682 release(dsguard, lock, wlock)
1714 release(dsguard, lock, wlock)
1683
1715
1684 @unfilteredmethod # Until we get smarter cache management
1716 @unfilteredmethod # Until we get smarter cache management
1685 def _rollback(self, dryrun, force, dsguard):
1717 def _rollback(self, dryrun, force, dsguard):
1686 ui = self.ui
1718 ui = self.ui
1687 try:
1719 try:
1688 args = self.vfs.read('undo.desc').splitlines()
1720 args = self.vfs.read('undo.desc').splitlines()
1689 (oldlen, desc, detail) = (int(args[0]), args[1], None)
1721 (oldlen, desc, detail) = (int(args[0]), args[1], None)
1690 if len(args) >= 3:
1722 if len(args) >= 3:
1691 detail = args[2]
1723 detail = args[2]
1692 oldtip = oldlen - 1
1724 oldtip = oldlen - 1
1693
1725
1694 if detail and ui.verbose:
1726 if detail and ui.verbose:
1695 msg = (_('repository tip rolled back to revision %d'
1727 msg = (_('repository tip rolled back to revision %d'
1696 ' (undo %s: %s)\n')
1728 ' (undo %s: %s)\n')
1697 % (oldtip, desc, detail))
1729 % (oldtip, desc, detail))
1698 else:
1730 else:
1699 msg = (_('repository tip rolled back to revision %d'
1731 msg = (_('repository tip rolled back to revision %d'
1700 ' (undo %s)\n')
1732 ' (undo %s)\n')
1701 % (oldtip, desc))
1733 % (oldtip, desc))
1702 except IOError:
1734 except IOError:
1703 msg = _('rolling back unknown transaction\n')
1735 msg = _('rolling back unknown transaction\n')
1704 desc = None
1736 desc = None
1705
1737
1706 if not force and self['.'] != self['tip'] and desc == 'commit':
1738 if not force and self['.'] != self['tip'] and desc == 'commit':
1707 raise error.Abort(
1739 raise error.Abort(
1708 _('rollback of last commit while not checked out '
1740 _('rollback of last commit while not checked out '
1709 'may lose data'), hint=_('use -f to force'))
1741 'may lose data'), hint=_('use -f to force'))
1710
1742
1711 ui.status(msg)
1743 ui.status(msg)
1712 if dryrun:
1744 if dryrun:
1713 return 0
1745 return 0
1714
1746
1715 parents = self.dirstate.parents()
1747 parents = self.dirstate.parents()
1716 self.destroying()
1748 self.destroying()
1717 vfsmap = {'plain': self.vfs, '': self.svfs}
1749 vfsmap = {'plain': self.vfs, '': self.svfs}
1718 transaction.rollback(self.svfs, vfsmap, 'undo', ui.warn,
1750 transaction.rollback(self.svfs, vfsmap, 'undo', ui.warn,
1719 checkambigfiles=_cachedfiles)
1751 checkambigfiles=_cachedfiles)
1720 if self.vfs.exists('undo.bookmarks'):
1752 if self.vfs.exists('undo.bookmarks'):
1721 self.vfs.rename('undo.bookmarks', 'bookmarks', checkambig=True)
1753 self.vfs.rename('undo.bookmarks', 'bookmarks', checkambig=True)
1722 if self.svfs.exists('undo.phaseroots'):
1754 if self.svfs.exists('undo.phaseroots'):
1723 self.svfs.rename('undo.phaseroots', 'phaseroots', checkambig=True)
1755 self.svfs.rename('undo.phaseroots', 'phaseroots', checkambig=True)
1724 self.invalidate()
1756 self.invalidate()
1725
1757
1726 parentgone = (parents[0] not in self.changelog.nodemap or
1758 parentgone = (parents[0] not in self.changelog.nodemap or
1727 parents[1] not in self.changelog.nodemap)
1759 parents[1] not in self.changelog.nodemap)
1728 if parentgone:
1760 if parentgone:
1729 # prevent dirstateguard from overwriting already restored one
1761 # prevent dirstateguard from overwriting already restored one
1730 dsguard.close()
1762 dsguard.close()
1731
1763
1732 narrowspec.restorebackup(self, 'undo.narrowspec')
1764 narrowspec.restorebackup(self, 'undo.narrowspec')
1733 self.dirstate.restorebackup(None, 'undo.dirstate')
1765 self.dirstate.restorebackup(None, 'undo.dirstate')
1734 try:
1766 try:
1735 branch = self.vfs.read('undo.branch')
1767 branch = self.vfs.read('undo.branch')
1736 self.dirstate.setbranch(encoding.tolocal(branch))
1768 self.dirstate.setbranch(encoding.tolocal(branch))
1737 except IOError:
1769 except IOError:
1738 ui.warn(_('named branch could not be reset: '
1770 ui.warn(_('named branch could not be reset: '
1739 'current branch is still \'%s\'\n')
1771 'current branch is still \'%s\'\n')
1740 % self.dirstate.branch())
1772 % self.dirstate.branch())
1741
1773
1742 parents = tuple([p.rev() for p in self[None].parents()])
1774 parents = tuple([p.rev() for p in self[None].parents()])
1743 if len(parents) > 1:
1775 if len(parents) > 1:
1744 ui.status(_('working directory now based on '
1776 ui.status(_('working directory now based on '
1745 'revisions %d and %d\n') % parents)
1777 'revisions %d and %d\n') % parents)
1746 else:
1778 else:
1747 ui.status(_('working directory now based on '
1779 ui.status(_('working directory now based on '
1748 'revision %d\n') % parents)
1780 'revision %d\n') % parents)
1749 mergemod.mergestate.clean(self, self['.'].node())
1781 mergemod.mergestate.clean(self, self['.'].node())
1750
1782
1751 # TODO: if we know which new heads may result from this rollback, pass
1783 # TODO: if we know which new heads may result from this rollback, pass
1752 # them to destroy(), which will prevent the branchhead cache from being
1784 # them to destroy(), which will prevent the branchhead cache from being
1753 # invalidated.
1785 # invalidated.
1754 self.destroyed()
1786 self.destroyed()
1755 return 0
1787 return 0
1756
1788
1757 def _buildcacheupdater(self, newtransaction):
1789 def _buildcacheupdater(self, newtransaction):
1758 """called during a transaction to build the callback that updates caches
1790 """called during a transaction to build the callback that updates caches
1759
1791
1760 Lives on the repository to help extensions that might want to augment
1792 Lives on the repository to help extensions that might want to augment
1761 this logic. For this purpose, the created transaction is passed to the
1793 this logic. For this purpose, the created transaction is passed to the
1762 method.
1794 method.
1763 """
1795 """
1764 # we must avoid cyclic reference between repo and transaction.
1796 # we must avoid cyclic reference between repo and transaction.
1765 reporef = weakref.ref(self)
1797 reporef = weakref.ref(self)
1766 def updater(tr):
1798 def updater(tr):
1767 repo = reporef()
1799 repo = reporef()
1768 repo.updatecaches(tr)
1800 repo.updatecaches(tr)
1769 return updater
1801 return updater
1770
1802
1771 @unfilteredmethod
1803 @unfilteredmethod
1772 def updatecaches(self, tr=None, full=False):
1804 def updatecaches(self, tr=None, full=False):
1773 """warm appropriate caches
1805 """warm appropriate caches
1774
1806
1775 If this function is called after a transaction close, the transaction
1807 If this function is called after a transaction close, the transaction
1776 will be available in the 'tr' argument. This can be used to selectively
1808 will be available in the 'tr' argument. This can be used to selectively
1777 update caches relevant to the changes in that transaction.
1809 update caches relevant to the changes in that transaction.
1778
1810
1779 If 'full' is set, make sure all caches the function knows about have
1811 If 'full' is set, make sure all caches the function knows about have
1780 up-to-date data. Even the ones usually loaded more lazily.
1812 up-to-date data. Even the ones usually loaded more lazily.
1781 """
1813 """
1782 if tr is not None and tr.hookargs.get('source') == 'strip':
1814 if tr is not None and tr.hookargs.get('source') == 'strip':
1783 # During strip, many caches are invalid but
1815 # During strip, many caches are invalid but
1784 # a later call to `destroyed` will refresh them.
1816 # a later call to `destroyed` will refresh them.
1785 return
1817 return
1786
1818
1787 if tr is None or tr.changes['origrepolen'] < len(self):
1819 if tr is None or tr.changes['origrepolen'] < len(self):
1788 # updating the unfiltered branchmap should refresh all the others,
1820 # updating the unfiltered branchmap should refresh all the others,
1789 self.ui.debug('updating the branch cache\n')
1821 self.ui.debug('updating the branch cache\n')
1790 branchmap.updatecache(self.filtered('served'))
1822 branchmap.updatecache(self.filtered('served'))
1791
1823
1792 if full:
1824 if full:
1793 rbc = self.revbranchcache()
1825 rbc = self.revbranchcache()
1794 for r in self.changelog:
1826 for r in self.changelog:
1795 rbc.branchinfo(r)
1827 rbc.branchinfo(r)
1796 rbc.write()
1828 rbc.write()
1797
1829
1798 # ensure the working copy parents are in the manifestfulltextcache
1830 # ensure the working copy parents are in the manifestfulltextcache
1799 for ctx in self['.'].parents():
1831 for ctx in self['.'].parents():
1800 ctx.manifest() # accessing the manifest is enough
1832 ctx.manifest() # accessing the manifest is enough
1801
1833
1802 def invalidatecaches(self):
1834 def invalidatecaches(self):
1803
1835
1804 if '_tagscache' in vars(self):
1836 if '_tagscache' in vars(self):
1805 # can't use delattr on proxy
1837 # can't use delattr on proxy
1806 del self.__dict__['_tagscache']
1838 del self.__dict__['_tagscache']
1807
1839
1808 self.unfiltered()._branchcaches.clear()
1840 self.unfiltered()._branchcaches.clear()
1809 self.invalidatevolatilesets()
1841 self.invalidatevolatilesets()
1810 self._sparsesignaturecache.clear()
1842 self._sparsesignaturecache.clear()
1811
1843
1812 def invalidatevolatilesets(self):
1844 def invalidatevolatilesets(self):
1813 self.filteredrevcache.clear()
1845 self.filteredrevcache.clear()
1814 obsolete.clearobscaches(self)
1846 obsolete.clearobscaches(self)
1815
1847
1816 def invalidatedirstate(self):
1848 def invalidatedirstate(self):
1817 '''Invalidates the dirstate, causing the next call to dirstate
1849 '''Invalidates the dirstate, causing the next call to dirstate
1818 to check if it was modified since the last time it was read,
1850 to check if it was modified since the last time it was read,
1819 rereading it if it has.
1851 rereading it if it has.
1820
1852
1821 This is different from dirstate.invalidate() in that it doesn't always
1853 This is different from dirstate.invalidate() in that it doesn't always
1822 reread the dirstate. Use dirstate.invalidate() if you want to
1854 reread the dirstate. Use dirstate.invalidate() if you want to
1823 explicitly read the dirstate again (i.e. restoring it to a previous
1855 explicitly read the dirstate again (i.e. restoring it to a previous
1824 known good state).'''
1856 known good state).'''
1825 if hasunfilteredcache(self, 'dirstate'):
1857 if hasunfilteredcache(self, 'dirstate'):
1826 for k in self.dirstate._filecache:
1858 for k in self.dirstate._filecache:
1827 try:
1859 try:
1828 delattr(self.dirstate, k)
1860 delattr(self.dirstate, k)
1829 except AttributeError:
1861 except AttributeError:
1830 pass
1862 pass
1831 delattr(self.unfiltered(), 'dirstate')
1863 delattr(self.unfiltered(), 'dirstate')
1832
1864
1833 def invalidate(self, clearfilecache=False):
1865 def invalidate(self, clearfilecache=False):
1834 '''Invalidates both store and non-store parts other than dirstate
1866 '''Invalidates both store and non-store parts other than dirstate
1835
1867
1836 If a transaction is running, invalidation of store is omitted,
1868 If a transaction is running, invalidation of store is omitted,
1837 because discarding in-memory changes might cause inconsistency
1869 because discarding in-memory changes might cause inconsistency
1838 (e.g. incomplete fncache causes unintentional failure, but
1870 (e.g. incomplete fncache causes unintentional failure, but
1839 redundant one doesn't).
1871 redundant one doesn't).
1840 '''
1872 '''
1841 unfiltered = self.unfiltered() # all file caches are stored unfiltered
1873 unfiltered = self.unfiltered() # all file caches are stored unfiltered
1842 for k in list(self._filecache.keys()):
1874 for k in list(self._filecache.keys()):
1843 # dirstate is invalidated separately in invalidatedirstate()
1875 # dirstate is invalidated separately in invalidatedirstate()
1844 if k == 'dirstate':
1876 if k == 'dirstate':
1845 continue
1877 continue
1846 if (k == 'changelog' and
1878 if (k == 'changelog' and
1847 self.currenttransaction() and
1879 self.currenttransaction() and
1848 self.changelog._delayed):
1880 self.changelog._delayed):
1849 # The changelog object may store unwritten revisions. We don't
1881 # The changelog object may store unwritten revisions. We don't
1850 # want to lose them.
1882 # want to lose them.
1851 # TODO: Solve the problem instead of working around it.
1883 # TODO: Solve the problem instead of working around it.
1852 continue
1884 continue
1853
1885
1854 if clearfilecache:
1886 if clearfilecache:
1855 del self._filecache[k]
1887 del self._filecache[k]
1856 try:
1888 try:
1857 delattr(unfiltered, k)
1889 delattr(unfiltered, k)
1858 except AttributeError:
1890 except AttributeError:
1859 pass
1891 pass
1860 self.invalidatecaches()
1892 self.invalidatecaches()
1861 if not self.currenttransaction():
1893 if not self.currenttransaction():
1862 # TODO: Changing contents of store outside transaction
1894 # TODO: Changing contents of store outside transaction
1863 # causes inconsistency. We should make in-memory store
1895 # causes inconsistency. We should make in-memory store
1864 # changes detectable, and abort if changed.
1896 # changes detectable, and abort if changed.
1865 self.store.invalidatecaches()
1897 self.store.invalidatecaches()
1866
1898
1867 def invalidateall(self):
1899 def invalidateall(self):
1868 '''Fully invalidates both store and non-store parts, causing the
1900 '''Fully invalidates both store and non-store parts, causing the
1869 subsequent operation to reread any outside changes.'''
1901 subsequent operation to reread any outside changes.'''
1870 # extensions should hook this to invalidate their caches
1902 # extensions should hook this to invalidate their caches
1871 self.invalidate()
1903 self.invalidate()
1872 self.invalidatedirstate()
1904 self.invalidatedirstate()
1873
1905
1874 @unfilteredmethod
1906 @unfilteredmethod
1875 def _refreshfilecachestats(self, tr):
1907 def _refreshfilecachestats(self, tr):
1876 """Reload stats of cached files so that they are flagged as valid"""
1908 """Reload stats of cached files so that they are flagged as valid"""
1877 for k, ce in self._filecache.items():
1909 for k, ce in self._filecache.items():
1878 k = pycompat.sysstr(k)
1910 k = pycompat.sysstr(k)
1879 if k == r'dirstate' or k not in self.__dict__:
1911 if k == r'dirstate' or k not in self.__dict__:
1880 continue
1912 continue
1881 ce.refresh()
1913 ce.refresh()
1882
1914
1883 def _lock(self, vfs, lockname, wait, releasefn, acquirefn, desc,
1915 def _lock(self, vfs, lockname, wait, releasefn, acquirefn, desc,
1884 inheritchecker=None, parentenvvar=None):
1916 inheritchecker=None, parentenvvar=None):
1885 parentlock = None
1917 parentlock = None
1886 # the contents of parentenvvar are used by the underlying lock to
1918 # the contents of parentenvvar are used by the underlying lock to
1887 # determine whether it can be inherited
1919 # determine whether it can be inherited
1888 if parentenvvar is not None:
1920 if parentenvvar is not None:
1889 parentlock = encoding.environ.get(parentenvvar)
1921 parentlock = encoding.environ.get(parentenvvar)
1890
1922
1891 timeout = 0
1923 timeout = 0
1892 warntimeout = 0
1924 warntimeout = 0
1893 if wait:
1925 if wait:
1894 timeout = self.ui.configint("ui", "timeout")
1926 timeout = self.ui.configint("ui", "timeout")
1895 warntimeout = self.ui.configint("ui", "timeout.warn")
1927 warntimeout = self.ui.configint("ui", "timeout.warn")
1896 # internal config: ui.signal-safe-lock
1928 # internal config: ui.signal-safe-lock
1897 signalsafe = self.ui.configbool('ui', 'signal-safe-lock')
1929 signalsafe = self.ui.configbool('ui', 'signal-safe-lock')
1898
1930
1899 l = lockmod.trylock(self.ui, vfs, lockname, timeout, warntimeout,
1931 l = lockmod.trylock(self.ui, vfs, lockname, timeout, warntimeout,
1900 releasefn=releasefn,
1932 releasefn=releasefn,
1901 acquirefn=acquirefn, desc=desc,
1933 acquirefn=acquirefn, desc=desc,
1902 inheritchecker=inheritchecker,
1934 inheritchecker=inheritchecker,
1903 parentlock=parentlock,
1935 parentlock=parentlock,
1904 signalsafe=signalsafe)
1936 signalsafe=signalsafe)
1905 return l
1937 return l
1906
1938
1907 def _afterlock(self, callback):
1939 def _afterlock(self, callback):
1908 """add a callback to be run when the repository is fully unlocked
1940 """add a callback to be run when the repository is fully unlocked
1909
1941
1910 The callback will be executed when the outermost lock is released
1942 The callback will be executed when the outermost lock is released
1911 (with wlock being higher level than 'lock')."""
1943 (with wlock being higher level than 'lock')."""
1912 for ref in (self._wlockref, self._lockref):
1944 for ref in (self._wlockref, self._lockref):
1913 l = ref and ref()
1945 l = ref and ref()
1914 if l and l.held:
1946 if l and l.held:
1915 l.postrelease.append(callback)
1947 l.postrelease.append(callback)
1916 break
1948 break
1917 else: # no lock has been found.
1949 else: # no lock has been found.
1918 callback()
1950 callback()
1919
1951
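# Illustrative sketch (not part of this module): deferring work until the
# repository is fully unlocked, as described in the _afterlock docstring
# above. The status message is a made-up example.
def _notifywhenunlocked(repo):
    with repo.lock():
        repo._afterlock(lambda: repo.ui.status(b'all locks released\n'))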
1920 def lock(self, wait=True):
1952 def lock(self, wait=True):
1921 '''Lock the repository store (.hg/store) and return a weak reference
1953 '''Lock the repository store (.hg/store) and return a weak reference
1922 to the lock. Use this before modifying the store (e.g. committing or
1954 to the lock. Use this before modifying the store (e.g. committing or
1923 stripping). If you are opening a transaction, get a lock as well.
1955 stripping). If you are opening a transaction, get a lock as well.
1924
1956
1925 If both 'lock' and 'wlock' must be acquired, ensure you always acquire
1957 If both 'lock' and 'wlock' must be acquired, ensure you always acquire
1926 'wlock' first to avoid a dead-lock hazard.'''
1958 'wlock' first to avoid a dead-lock hazard.'''
1927 l = self._currentlock(self._lockref)
1959 l = self._currentlock(self._lockref)
1928 if l is not None:
1960 if l is not None:
1929 l.lock()
1961 l.lock()
1930 return l
1962 return l
1931
1963
1932 l = self._lock(self.svfs, "lock", wait, None,
1964 l = self._lock(self.svfs, "lock", wait, None,
1933 self.invalidate, _('repository %s') % self.origroot)
1965 self.invalidate, _('repository %s') % self.origroot)
1934 self._lockref = weakref.ref(l)
1966 self._lockref = weakref.ref(l)
1935 return l
1967 return l
1936
1968
1937 def _wlockchecktransaction(self):
1969 def _wlockchecktransaction(self):
1938 if self.currenttransaction() is not None:
1970 if self.currenttransaction() is not None:
1939 raise error.LockInheritanceContractViolation(
1971 raise error.LockInheritanceContractViolation(
1940 'wlock cannot be inherited in the middle of a transaction')
1972 'wlock cannot be inherited in the middle of a transaction')
1941
1973
1942 def wlock(self, wait=True):
1974 def wlock(self, wait=True):
1943 '''Lock the non-store parts of the repository (everything under
1975 '''Lock the non-store parts of the repository (everything under
1944 .hg except .hg/store) and return a weak reference to the lock.
1976 .hg except .hg/store) and return a weak reference to the lock.
1945
1977
1946 Use this before modifying files in .hg.
1978 Use this before modifying files in .hg.
1947
1979
1948 If both 'lock' and 'wlock' must be acquired, ensure you always acquire
1980 If both 'lock' and 'wlock' must be acquired, ensure you always acquire
1949 'wlock' first to avoid a dead-lock hazard.'''
1981 'wlock' first to avoid a dead-lock hazard.'''
1950 l = self._wlockref and self._wlockref()
1982 l = self._wlockref and self._wlockref()
1951 if l is not None and l.held:
1983 if l is not None and l.held:
1952 l.lock()
1984 l.lock()
1953 return l
1985 return l
1954
1986
1955 # We do not need to check for non-waiting lock acquisition. Such
1987 # We do not need to check for non-waiting lock acquisition. Such
1956 # acquisition would not cause dead-lock as they would just fail.
1988 # acquisition would not cause dead-lock as they would just fail.
1957 if wait and (self.ui.configbool('devel', 'all-warnings')
1989 if wait and (self.ui.configbool('devel', 'all-warnings')
1958 or self.ui.configbool('devel', 'check-locks')):
1990 or self.ui.configbool('devel', 'check-locks')):
1959 if self._currentlock(self._lockref) is not None:
1991 if self._currentlock(self._lockref) is not None:
1960 self.ui.develwarn('"wlock" acquired after "lock"')
1992 self.ui.develwarn('"wlock" acquired after "lock"')
1961
1993
1962 def unlock():
1994 def unlock():
1963 if self.dirstate.pendingparentchange():
1995 if self.dirstate.pendingparentchange():
1964 self.dirstate.invalidate()
1996 self.dirstate.invalidate()
1965 else:
1997 else:
1966 self.dirstate.write(None)
1998 self.dirstate.write(None)
1967
1999
1968 self._filecache['dirstate'].refresh()
2000 self._filecache['dirstate'].refresh()
1969
2001
1970 l = self._lock(self.vfs, "wlock", wait, unlock,
2002 l = self._lock(self.vfs, "wlock", wait, unlock,
1971 self.invalidatedirstate, _('working directory of %s') %
2003 self.invalidatedirstate, _('working directory of %s') %
1972 self.origroot,
2004 self.origroot,
1973 inheritchecker=self._wlockchecktransaction,
2005 inheritchecker=self._wlockchecktransaction,
1974 parentenvvar='HG_WLOCK_LOCKER')
2006 parentenvvar='HG_WLOCK_LOCKER')
1975 self._wlockref = weakref.ref(l)
2007 self._wlockref = weakref.ref(l)
1976 return l
2008 return l
1977
2009
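# Illustrative sketch (not part of this module): the acquisition order that
# the lock()/wlock() docstrings above require -- take 'wlock' before 'lock'
# to avoid the dead-lock hazard.
def _lockedoperation(repo):
    with repo.wlock(), repo.lock():
        pass  # modify the working copy and the store here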
1978 def _currentlock(self, lockref):
2010 def _currentlock(self, lockref):
1979 """Returns the lock if it's held, or None if it's not."""
2011 """Returns the lock if it's held, or None if it's not."""
1980 if lockref is None:
2012 if lockref is None:
1981 return None
2013 return None
1982 l = lockref()
2014 l = lockref()
1983 if l is None or not l.held:
2015 if l is None or not l.held:
1984 return None
2016 return None
1985 return l
2017 return l
1986
2018
1987 def currentwlock(self):
2019 def currentwlock(self):
1988 """Returns the wlock if it's held, or None if it's not."""
2020 """Returns the wlock if it's held, or None if it's not."""
1989 return self._currentlock(self._wlockref)
2021 return self._currentlock(self._wlockref)
1990
2022
1991 def _filecommit(self, fctx, manifest1, manifest2, linkrev, tr, changelist):
2023 def _filecommit(self, fctx, manifest1, manifest2, linkrev, tr, changelist):
1992 """
2024 """
1993 commit an individual file as part of a larger transaction
2025 commit an individual file as part of a larger transaction
1994 """
2026 """
1995
2027
1996 fname = fctx.path()
2028 fname = fctx.path()
1997 fparent1 = manifest1.get(fname, nullid)
2029 fparent1 = manifest1.get(fname, nullid)
1998 fparent2 = manifest2.get(fname, nullid)
2030 fparent2 = manifest2.get(fname, nullid)
1999 if isinstance(fctx, context.filectx):
2031 if isinstance(fctx, context.filectx):
2000 node = fctx.filenode()
2032 node = fctx.filenode()
2001 if node in [fparent1, fparent2]:
2033 if node in [fparent1, fparent2]:
2002 self.ui.debug('reusing %s filelog entry\n' % fname)
2034 self.ui.debug('reusing %s filelog entry\n' % fname)
2003 if manifest1.flags(fname) != fctx.flags():
2035 if manifest1.flags(fname) != fctx.flags():
2004 changelist.append(fname)
2036 changelist.append(fname)
2005 return node
2037 return node
2006
2038
2007 flog = self.file(fname)
2039 flog = self.file(fname)
2008 meta = {}
2040 meta = {}
2009 copy = fctx.renamed()
2041 copy = fctx.renamed()
2010 if copy and copy[0] != fname:
2042 if copy and copy[0] != fname:
2011 # Mark the new revision of this file as a copy of another
2043 # Mark the new revision of this file as a copy of another
2012 # file. This copy data will effectively act as a parent
2044 # file. This copy data will effectively act as a parent
2013 # of this new revision. If this is a merge, the first
2045 # of this new revision. If this is a merge, the first
2014 # parent will be the nullid (meaning "look up the copy data")
2046 # parent will be the nullid (meaning "look up the copy data")
2015 # and the second one will be the other parent. For example:
2047 # and the second one will be the other parent. For example:
2016 #
2048 #
2017 # 0 --- 1 --- 3 rev1 changes file foo
2049 # 0 --- 1 --- 3 rev1 changes file foo
2018 # \ / rev2 renames foo to bar and changes it
2050 # \ / rev2 renames foo to bar and changes it
2019 # \- 2 -/ rev3 should have bar with all changes and
2051 # \- 2 -/ rev3 should have bar with all changes and
2020 # should record that bar descends from
2052 # should record that bar descends from
2021 # bar in rev2 and foo in rev1
2053 # bar in rev2 and foo in rev1
2022 #
2054 #
2023 # this allows this merge to succeed:
2055 # this allows this merge to succeed:
2024 #
2056 #
2025 # 0 --- 1 --- 3 rev4 reverts the content change from rev2
2057 # 0 --- 1 --- 3 rev4 reverts the content change from rev2
2026 # \ / merging rev3 and rev4 should use bar@rev2
2058 # \ / merging rev3 and rev4 should use bar@rev2
2027 # \- 2 --- 4 as the merge base
2059 # \- 2 --- 4 as the merge base
2028 #
2060 #
2029
2061
2030 cfname = copy[0]
2062 cfname = copy[0]
2031 crev = manifest1.get(cfname)
2063 crev = manifest1.get(cfname)
2032 newfparent = fparent2
2064 newfparent = fparent2
2033
2065
2034 if manifest2: # branch merge
2066 if manifest2: # branch merge
2035 if fparent2 == nullid or crev is None: # copied on remote side
2067 if fparent2 == nullid or crev is None: # copied on remote side
2036 if cfname in manifest2:
2068 if cfname in manifest2:
2037 crev = manifest2[cfname]
2069 crev = manifest2[cfname]
2038 newfparent = fparent1
2070 newfparent = fparent1
2039
2071
2040 # Here, we used to search backwards through history to try to find
2072 # Here, we used to search backwards through history to try to find
2041 # where the file copy came from if the source of a copy was not in
2073 # where the file copy came from if the source of a copy was not in
2042 # the parent directory. However, this doesn't actually make sense to
2074 # the parent directory. However, this doesn't actually make sense to
2043 # do (what does a copy from something not in your working copy even
2075 # do (what does a copy from something not in your working copy even
2044 # mean?) and it causes bugs (eg, issue4476). Instead, we will warn
2076 # mean?) and it causes bugs (eg, issue4476). Instead, we will warn
2045 # the user that copy information was dropped, so if they didn't
2077 # the user that copy information was dropped, so if they didn't
2046 # expect this outcome it can be fixed, but this is the correct
2078 # expect this outcome it can be fixed, but this is the correct
2047 # behavior in this circumstance.
2079 # behavior in this circumstance.
2048
2080
2049 if crev:
2081 if crev:
2050 self.ui.debug(" %s: copy %s:%s\n" % (fname, cfname, hex(crev)))
2082 self.ui.debug(" %s: copy %s:%s\n" % (fname, cfname, hex(crev)))
2051 meta["copy"] = cfname
2083 meta["copy"] = cfname
2052 meta["copyrev"] = hex(crev)
2084 meta["copyrev"] = hex(crev)
2053 fparent1, fparent2 = nullid, newfparent
2085 fparent1, fparent2 = nullid, newfparent
2054 else:
2086 else:
2055 self.ui.warn(_("warning: can't find ancestor for '%s' "
2087 self.ui.warn(_("warning: can't find ancestor for '%s' "
2056 "copied from '%s'!\n") % (fname, cfname))
2088 "copied from '%s'!\n") % (fname, cfname))
2057
2089
2058 elif fparent1 == nullid:
2090 elif fparent1 == nullid:
2059 fparent1, fparent2 = fparent2, nullid
2091 fparent1, fparent2 = fparent2, nullid
2060 elif fparent2 != nullid:
2092 elif fparent2 != nullid:
2061 # is one parent an ancestor of the other?
2093 # is one parent an ancestor of the other?
2062 fparentancestors = flog.commonancestorsheads(fparent1, fparent2)
2094 fparentancestors = flog.commonancestorsheads(fparent1, fparent2)
2063 if fparent1 in fparentancestors:
2095 if fparent1 in fparentancestors:
2064 fparent1, fparent2 = fparent2, nullid
2096 fparent1, fparent2 = fparent2, nullid
2065 elif fparent2 in fparentancestors:
2097 elif fparent2 in fparentancestors:
2066 fparent2 = nullid
2098 fparent2 = nullid
2067
2099
2068 # is the file changed?
2100 # is the file changed?
2069 text = fctx.data()
2101 text = fctx.data()
2070 if fparent2 != nullid or flog.cmp(fparent1, text) or meta:
2102 if fparent2 != nullid or flog.cmp(fparent1, text) or meta:
2071 changelist.append(fname)
2103 changelist.append(fname)
2072 return flog.add(text, meta, tr, linkrev, fparent1, fparent2)
2104 return flog.add(text, meta, tr, linkrev, fparent1, fparent2)
2073 # are just the flags changed during merge?
2105 # are just the flags changed during merge?
2074 elif fname in manifest1 and manifest1.flags(fname) != fctx.flags():
2106 elif fname in manifest1 and manifest1.flags(fname) != fctx.flags():
2075 changelist.append(fname)
2107 changelist.append(fname)
2076
2108
2077 return fparent1
2109 return fparent1
2078
2110
2079 def checkcommitpatterns(self, wctx, vdirs, match, status, fail):
2111 def checkcommitpatterns(self, wctx, vdirs, match, status, fail):
2080 """check for commit arguments that aren't committable"""
2112 """check for commit arguments that aren't committable"""
2081 if match.isexact() or match.prefix():
2113 if match.isexact() or match.prefix():
2082 matched = set(status.modified + status.added + status.removed)
2114 matched = set(status.modified + status.added + status.removed)
2083
2115
2084 for f in match.files():
2116 for f in match.files():
2085 f = self.dirstate.normalize(f)
2117 f = self.dirstate.normalize(f)
2086 if f == '.' or f in matched or f in wctx.substate:
2118 if f == '.' or f in matched or f in wctx.substate:
2087 continue
2119 continue
2088 if f in status.deleted:
2120 if f in status.deleted:
2089 fail(f, _('file not found!'))
2121 fail(f, _('file not found!'))
2090 if f in vdirs: # visited directory
2122 if f in vdirs: # visited directory
2091 d = f + '/'
2123 d = f + '/'
2092 for mf in matched:
2124 for mf in matched:
2093 if mf.startswith(d):
2125 if mf.startswith(d):
2094 break
2126 break
2095 else:
2127 else:
2096 fail(f, _("no match under directory!"))
2128 fail(f, _("no match under directory!"))
2097 elif f not in self.dirstate:
2129 elif f not in self.dirstate:
2098 fail(f, _("file not tracked!"))
2130 fail(f, _("file not tracked!"))
2099
2131
2100 @unfilteredmethod
2132 @unfilteredmethod
2101 def commit(self, text="", user=None, date=None, match=None, force=False,
2133 def commit(self, text="", user=None, date=None, match=None, force=False,
2102 editor=False, extra=None):
2134 editor=False, extra=None):
2103 """Add a new revision to current repository.
2135 """Add a new revision to current repository.
2104
2136
2105 Revision information is gathered from the working directory;
2137 Revision information is gathered from the working directory;
2106 match can be used to filter the committed files. If editor is
2138 match can be used to filter the committed files. If editor is
2107 supplied, it is called to get a commit message.
2139 supplied, it is called to get a commit message.
2108 """
2140 """
2109 if extra is None:
2141 if extra is None:
2110 extra = {}
2142 extra = {}
2111
2143
2112 def fail(f, msg):
2144 def fail(f, msg):
2113 raise error.Abort('%s: %s' % (f, msg))
2145 raise error.Abort('%s: %s' % (f, msg))
2114
2146
2115 if not match:
2147 if not match:
2116 match = matchmod.always(self.root, '')
2148 match = matchmod.always(self.root, '')
2117
2149
2118 if not force:
2150 if not force:
2119 vdirs = []
2151 vdirs = []
2120 match.explicitdir = vdirs.append
2152 match.explicitdir = vdirs.append
2121 match.bad = fail
2153 match.bad = fail
2122
2154
2123 wlock = lock = tr = None
2155 wlock = lock = tr = None
2124 try:
2156 try:
2125 wlock = self.wlock()
2157 wlock = self.wlock()
2126 lock = self.lock() # for recent changelog (see issue4368)
2158 lock = self.lock() # for recent changelog (see issue4368)
2127
2159
2128 wctx = self[None]
2160 wctx = self[None]
2129 merge = len(wctx.parents()) > 1
2161 merge = len(wctx.parents()) > 1
2130
2162
2131 if not force and merge and not match.always():
2163 if not force and merge and not match.always():
2132 raise error.Abort(_('cannot partially commit a merge '
2164 raise error.Abort(_('cannot partially commit a merge '
2133 '(do not specify files or patterns)'))
2165 '(do not specify files or patterns)'))
2134
2166
2135 status = self.status(match=match, clean=force)
2167 status = self.status(match=match, clean=force)
2136 if force:
2168 if force:
2137 status.modified.extend(status.clean) # mq may commit clean files
2169 status.modified.extend(status.clean) # mq may commit clean files
2138
2170
2139 # check subrepos
2171 # check subrepos
2140 subs, commitsubs, newstate = subrepoutil.precommit(
2172 subs, commitsubs, newstate = subrepoutil.precommit(
2141 self.ui, wctx, status, match, force=force)
2173 self.ui, wctx, status, match, force=force)
2142
2174
2143 # make sure all explicit patterns are matched
2175 # make sure all explicit patterns are matched
2144 if not force:
2176 if not force:
2145 self.checkcommitpatterns(wctx, vdirs, match, status, fail)
2177 self.checkcommitpatterns(wctx, vdirs, match, status, fail)
2146
2178
2147 cctx = context.workingcommitctx(self, status,
2179 cctx = context.workingcommitctx(self, status,
2148 text, user, date, extra)
2180 text, user, date, extra)
2149
2181
2150 # internal config: ui.allowemptycommit
2182 # internal config: ui.allowemptycommit
2151 allowemptycommit = (wctx.branch() != wctx.p1().branch()
2183 allowemptycommit = (wctx.branch() != wctx.p1().branch()
2152 or extra.get('close') or merge or cctx.files()
2184 or extra.get('close') or merge or cctx.files()
2153 or self.ui.configbool('ui', 'allowemptycommit'))
2185 or self.ui.configbool('ui', 'allowemptycommit'))
2154 if not allowemptycommit:
2186 if not allowemptycommit:
2155 return None
2187 return None
2156
2188
2157 if merge and cctx.deleted():
2189 if merge and cctx.deleted():
2158 raise error.Abort(_("cannot commit merge with missing files"))
2190 raise error.Abort(_("cannot commit merge with missing files"))
2159
2191
2160 ms = mergemod.mergestate.read(self)
2192 ms = mergemod.mergestate.read(self)
2161 mergeutil.checkunresolved(ms)
2193 mergeutil.checkunresolved(ms)
2162
2194
2163 if editor:
2195 if editor:
2164 cctx._text = editor(self, cctx, subs)
2196 cctx._text = editor(self, cctx, subs)
2165 edited = (text != cctx._text)
2197 edited = (text != cctx._text)
2166
2198
2167 # Save commit message in case this transaction gets rolled back
2199 # Save commit message in case this transaction gets rolled back
2168 # (e.g. by a pretxncommit hook). Leave the content alone on
2200 # (e.g. by a pretxncommit hook). Leave the content alone on
2169 # the assumption that the user will use the same editor again.
2201 # the assumption that the user will use the same editor again.
2170 msgfn = self.savecommitmessage(cctx._text)
2202 msgfn = self.savecommitmessage(cctx._text)
2171
2203
2172 # commit subs and write new state
2204 # commit subs and write new state
2173 if subs:
2205 if subs:
2174 for s in sorted(commitsubs):
2206 for s in sorted(commitsubs):
2175 sub = wctx.sub(s)
2207 sub = wctx.sub(s)
2176 self.ui.status(_('committing subrepository %s\n') %
2208 self.ui.status(_('committing subrepository %s\n') %
2177 subrepoutil.subrelpath(sub))
2209 subrepoutil.subrelpath(sub))
2178 sr = sub.commit(cctx._text, user, date)
2210 sr = sub.commit(cctx._text, user, date)
2179 newstate[s] = (newstate[s][0], sr)
2211 newstate[s] = (newstate[s][0], sr)
2180 subrepoutil.writestate(self, newstate)
2212 subrepoutil.writestate(self, newstate)
2181
2213
2182 p1, p2 = self.dirstate.parents()
2214 p1, p2 = self.dirstate.parents()
2183 hookp1, hookp2 = hex(p1), (p2 != nullid and hex(p2) or '')
2215 hookp1, hookp2 = hex(p1), (p2 != nullid and hex(p2) or '')
2184 try:
2216 try:
2185 self.hook("precommit", throw=True, parent1=hookp1,
2217 self.hook("precommit", throw=True, parent1=hookp1,
2186 parent2=hookp2)
2218 parent2=hookp2)
2187 tr = self.transaction('commit')
2219 tr = self.transaction('commit')
2188 ret = self.commitctx(cctx, True)
2220 ret = self.commitctx(cctx, True)
2189 except: # re-raises
2221 except: # re-raises
2190 if edited:
2222 if edited:
2191 self.ui.write(
2223 self.ui.write(
2192 _('note: commit message saved in %s\n') % msgfn)
2224 _('note: commit message saved in %s\n') % msgfn)
2193 raise
2225 raise
2194 # update bookmarks, dirstate and mergestate
2226 # update bookmarks, dirstate and mergestate
2195 bookmarks.update(self, [p1, p2], ret)
2227 bookmarks.update(self, [p1, p2], ret)
2196 cctx.markcommitted(ret)
2228 cctx.markcommitted(ret)
2197 ms.reset()
2229 ms.reset()
2198 tr.close()
2230 tr.close()
2199
2231
2200 finally:
2232 finally:
2201 lockmod.release(tr, lock, wlock)
2233 lockmod.release(tr, lock, wlock)
2202
2234
2203 def commithook(node=hex(ret), parent1=hookp1, parent2=hookp2):
2235 def commithook(node=hex(ret), parent1=hookp1, parent2=hookp2):
2204 # hack for commands that use a temporary commit (e.g. histedit)
2236 # hack for commands that use a temporary commit (e.g. histedit)
2205 # temporary commit got stripped before hook release
2237 # temporary commit got stripped before hook release
2206 if self.changelog.hasnode(ret):
2238 if self.changelog.hasnode(ret):
2207 self.hook("commit", node=node, parent1=parent1,
2239 self.hook("commit", node=node, parent1=parent1,
2208 parent2=parent2)
2240 parent2=parent2)
2209 self._afterlock(commithook)
2241 self._afterlock(commithook)
2210 return ret
2242 return ret
2211
2243
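# Illustrative sketch (not part of this module): a minimal commit() call as
# described in the docstring above. The file pattern, user and message are
# made up; commit() returns the new node, or None if the commit would have
# been empty.
def _examplecommit(repo):
    m = matchmod.match(repo.root, b'', [b'path:README'])
    return repo.commit(text=b'example: touch README',
                       user=b'Example <user@example.invalid>', match=m)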
2212 @unfilteredmethod
2244 @unfilteredmethod
2213 def commitctx(self, ctx, error=False):
2245 def commitctx(self, ctx, error=False):
2214 """Add a new revision to current repository.
2246 """Add a new revision to current repository.
2215 Revision information is passed via the context argument.
2247 Revision information is passed via the context argument.
2216
2248
2217 ctx.files() should list all files involved in this commit, i.e.
2249 ctx.files() should list all files involved in this commit, i.e.
2218 modified/added/removed files. On merge, it may be wider than the
2250 modified/added/removed files. On merge, it may be wider than the
2219 ctx.files() to be committed, since any file nodes derived directly
2251 ctx.files() to be committed, since any file nodes derived directly
2220 from p1 or p2 are excluded from the committed ctx.files().
2252 from p1 or p2 are excluded from the committed ctx.files().
2221 """
2253 """
2222
2254
2223 tr = None
2255 tr = None
2224 p1, p2 = ctx.p1(), ctx.p2()
2256 p1, p2 = ctx.p1(), ctx.p2()
2225 user = ctx.user()
2257 user = ctx.user()
2226
2258
2227 lock = self.lock()
2259 lock = self.lock()
2228 try:
2260 try:
2229 tr = self.transaction("commit")
2261 tr = self.transaction("commit")
2230 trp = weakref.proxy(tr)
2262 trp = weakref.proxy(tr)
2231
2263
2232 if ctx.manifestnode():
2264 if ctx.manifestnode():
2233 # reuse an existing manifest revision
2265 # reuse an existing manifest revision
2234 self.ui.debug('reusing known manifest\n')
2266 self.ui.debug('reusing known manifest\n')
2235 mn = ctx.manifestnode()
2267 mn = ctx.manifestnode()
2236 files = ctx.files()
2268 files = ctx.files()
2237 elif ctx.files():
2269 elif ctx.files():
2238 m1ctx = p1.manifestctx()
2270 m1ctx = p1.manifestctx()
2239 m2ctx = p2.manifestctx()
2271 m2ctx = p2.manifestctx()
2240 mctx = m1ctx.copy()
2272 mctx = m1ctx.copy()
2241
2273
2242 m = mctx.read()
2274 m = mctx.read()
2243 m1 = m1ctx.read()
2275 m1 = m1ctx.read()
2244 m2 = m2ctx.read()
2276 m2 = m2ctx.read()
2245
2277
2246 # check in files
2278 # check in files
2247 added = []
2279 added = []
2248 changed = []
2280 changed = []
2249 removed = list(ctx.removed())
2281 removed = list(ctx.removed())
2250 linkrev = len(self)
2282 linkrev = len(self)
2251 self.ui.note(_("committing files:\n"))
2283 self.ui.note(_("committing files:\n"))
2252 for f in sorted(ctx.modified() + ctx.added()):
2284 for f in sorted(ctx.modified() + ctx.added()):
2253 self.ui.note(f + "\n")
2285 self.ui.note(f + "\n")
2254 try:
2286 try:
2255 fctx = ctx[f]
2287 fctx = ctx[f]
2256 if fctx is None:
2288 if fctx is None:
2257 removed.append(f)
2289 removed.append(f)
2258 else:
2290 else:
2259 added.append(f)
2291 added.append(f)
2260 m[f] = self._filecommit(fctx, m1, m2, linkrev,
2292 m[f] = self._filecommit(fctx, m1, m2, linkrev,
2261 trp, changed)
2293 trp, changed)
2262 m.setflag(f, fctx.flags())
2294 m.setflag(f, fctx.flags())
2263 except OSError as inst:
2295 except OSError as inst:
2264 self.ui.warn(_("trouble committing %s!\n") % f)
2296 self.ui.warn(_("trouble committing %s!\n") % f)
2265 raise
2297 raise
2266 except IOError as inst:
2298 except IOError as inst:
2267 errcode = getattr(inst, 'errno', errno.ENOENT)
2299 errcode = getattr(inst, 'errno', errno.ENOENT)
2268 if error or errcode and errcode != errno.ENOENT:
2300 if error or errcode and errcode != errno.ENOENT:
2269 self.ui.warn(_("trouble committing %s!\n") % f)
2301 self.ui.warn(_("trouble committing %s!\n") % f)
2270 raise
2302 raise
2271
2303
2272 # update manifest
2304 # update manifest
2273 removed = [f for f in sorted(removed) if f in m1 or f in m2]
2305 removed = [f for f in sorted(removed) if f in m1 or f in m2]
2274 drop = [f for f in removed if f in m]
2306 drop = [f for f in removed if f in m]
2275 for f in drop:
2307 for f in drop:
2276 del m[f]
2308 del m[f]
2277 files = changed + removed
2309 files = changed + removed
2278 md = None
2310 md = None
2279 if not files:
2311 if not files:
2280 # if no "files" actually changed in terms of the changelog,
2312 # if no "files" actually changed in terms of the changelog,
2281 # try hard to detect unmodified manifest entry so that the
2313 # try hard to detect unmodified manifest entry so that the
2282 # exact same commit can be reproduced later on convert.
2314 # exact same commit can be reproduced later on convert.
2283 md = m1.diff(m, scmutil.matchfiles(self, ctx.files()))
2315 md = m1.diff(m, scmutil.matchfiles(self, ctx.files()))
2284 if not files and md:
2316 if not files and md:
2285 self.ui.debug('not reusing manifest (no file change in '
2317 self.ui.debug('not reusing manifest (no file change in '
2286 'changelog, but manifest differs)\n')
2318 'changelog, but manifest differs)\n')
2287 if files or md:
2319 if files or md:
2288 self.ui.note(_("committing manifest\n"))
2320 self.ui.note(_("committing manifest\n"))
2289 # we're using narrowmatch here since it's already applied at
2321 # we're using narrowmatch here since it's already applied at
2290 # other stages (such as dirstate.walk), so we're already
2322 # other stages (such as dirstate.walk), so we're already
2291 # ignoring things outside of narrowspec in most cases. The
2323 # ignoring things outside of narrowspec in most cases. The
2292 # one case where we might have files outside the narrowspec
2324 # one case where we might have files outside the narrowspec
2293 # at this point is merges, and we already error out in the
2325 # at this point is merges, and we already error out in the
2294 # case where the merge has files outside of the narrowspec,
2326 # case where the merge has files outside of the narrowspec,
2295 # so this is safe.
2327 # so this is safe.
2296 mn = mctx.write(trp, linkrev,
2328 mn = mctx.write(trp, linkrev,
2297 p1.manifestnode(), p2.manifestnode(),
2329 p1.manifestnode(), p2.manifestnode(),
2298 added, drop, match=self.narrowmatch())
2330 added, drop, match=self.narrowmatch())
2299 else:
2331 else:
2300 self.ui.debug('reusing manifest from p1 (listed files '
2332 self.ui.debug('reusing manifest from p1 (listed files '
2301 'actually unchanged)\n')
2333 'actually unchanged)\n')
2302 mn = p1.manifestnode()
2334 mn = p1.manifestnode()
2303 else:
2335 else:
2304 self.ui.debug('reusing manifest from p1 (no file change)\n')
2336 self.ui.debug('reusing manifest from p1 (no file change)\n')
2305 mn = p1.manifestnode()
2337 mn = p1.manifestnode()
2306 files = []
2338 files = []
2307
2339
2308 # update changelog
2340 # update changelog
2309 self.ui.note(_("committing changelog\n"))
2341 self.ui.note(_("committing changelog\n"))
2310 self.changelog.delayupdate(tr)
2342 self.changelog.delayupdate(tr)
2311 n = self.changelog.add(mn, files, ctx.description(),
2343 n = self.changelog.add(mn, files, ctx.description(),
2312 trp, p1.node(), p2.node(),
2344 trp, p1.node(), p2.node(),
2313 user, ctx.date(), ctx.extra().copy())
2345 user, ctx.date(), ctx.extra().copy())
2314 xp1, xp2 = p1.hex(), p2 and p2.hex() or ''
2346 xp1, xp2 = p1.hex(), p2 and p2.hex() or ''
2315 self.hook('pretxncommit', throw=True, node=hex(n), parent1=xp1,
2347 self.hook('pretxncommit', throw=True, node=hex(n), parent1=xp1,
2316 parent2=xp2)
2348 parent2=xp2)
2317 # set the new commit in its proper phase
2349 # set the new commit in its proper phase
2318 targetphase = subrepoutil.newcommitphase(self.ui, ctx)
2350 targetphase = subrepoutil.newcommitphase(self.ui, ctx)
2319 if targetphase:
2351 if targetphase:
2320 # retract boundary does not alter parent changesets.
2352 # retract boundary does not alter parent changesets.
2321 # if a parent has a higher phase, the resulting phase will
2353 # if a parent has a higher phase, the resulting phase will
2322 # be compliant anyway
2354 # be compliant anyway
2323 #
2355 #
2324 # if minimal phase was 0 we don't need to retract anything
2356 # if minimal phase was 0 we don't need to retract anything
2325 phases.registernew(self, tr, targetphase, [n])
2357 phases.registernew(self, tr, targetphase, [n])
2326 tr.close()
2358 tr.close()
2327 return n
2359 return n
2328 finally:
2360 finally:
2329 if tr:
2361 if tr:
2330 tr.release()
2362 tr.release()
2331 lock.release()
2363 lock.release()
2332
2364
2333 @unfilteredmethod
2365 @unfilteredmethod
2334 def destroying(self):
2366 def destroying(self):
2335 '''Inform the repository that nodes are about to be destroyed.
2367 '''Inform the repository that nodes are about to be destroyed.
2336 Intended for use by strip and rollback, so there's a common
2368 Intended for use by strip and rollback, so there's a common
2337 place for anything that has to be done before destroying history.
2369 place for anything that has to be done before destroying history.
2338
2370
2339 This is mostly useful for saving state that is in memory and waiting
2371 This is mostly useful for saving state that is in memory and waiting
2340 to be flushed when the current lock is released. Because a call to
2372 to be flushed when the current lock is released. Because a call to
2341 destroyed is imminent, the repo will be invalidated causing those
2373 destroyed is imminent, the repo will be invalidated causing those
2342 changes to stay in memory (waiting for the next unlock), or vanish
2374 changes to stay in memory (waiting for the next unlock), or vanish
2343 completely.
2375 completely.
2344 '''
2376 '''
2345 # When using the same lock to commit and strip, the phasecache is left
2377 # When using the same lock to commit and strip, the phasecache is left
2346 # dirty after committing. Then when we strip, the repo is invalidated,
2378 # dirty after committing. Then when we strip, the repo is invalidated,
2347 # causing those changes to disappear.
2379 # causing those changes to disappear.
2348 if '_phasecache' in vars(self):
2380 if '_phasecache' in vars(self):
2349 self._phasecache.write()
2381 self._phasecache.write()
2350
2382
2351 @unfilteredmethod
2383 @unfilteredmethod
2352 def destroyed(self):
2384 def destroyed(self):
2353 '''Inform the repository that nodes have been destroyed.
2385 '''Inform the repository that nodes have been destroyed.
2354 Intended for use by strip and rollback, so there's a common
2386 Intended for use by strip and rollback, so there's a common
2355 place for anything that has to be done after destroying history.
2387 place for anything that has to be done after destroying history.
2356 '''
2388 '''
2357 # When one tries to:
2389 # When one tries to:
2358 # 1) destroy nodes thus calling this method (e.g. strip)
2390 # 1) destroy nodes thus calling this method (e.g. strip)
2359 # 2) use phasecache somewhere (e.g. commit)
2391 # 2) use phasecache somewhere (e.g. commit)
2360 #
2392 #
2361 # then 2) will fail because the phasecache contains nodes that were
2393 # then 2) will fail because the phasecache contains nodes that were
2362 # removed. We can either remove phasecache from the filecache,
2394 # removed. We can either remove phasecache from the filecache,
2363 # causing it to reload next time it is accessed, or simply filter
2395 # causing it to reload next time it is accessed, or simply filter
2364 # the removed nodes now and write the updated cache.
2396 # the removed nodes now and write the updated cache.
2365 self._phasecache.filterunknown(self)
2397 self._phasecache.filterunknown(self)
2366 self._phasecache.write()
2398 self._phasecache.write()
2367
2399
2368 # refresh all repository caches
2400 # refresh all repository caches
2369 self.updatecaches()
2401 self.updatecaches()
2370
2402
2371 # Ensure the persistent tag cache is updated. Doing it now
2403 # Ensure the persistent tag cache is updated. Doing it now
2372 # means that the tag cache only has to worry about destroyed
2404 # means that the tag cache only has to worry about destroyed
2373 # heads immediately after a strip/rollback. That in turn
2405 # heads immediately after a strip/rollback. That in turn
2374 # guarantees that "cachetip == currenttip" (comparing both rev
2406 # guarantees that "cachetip == currenttip" (comparing both rev
2375 # and node) always means no nodes have been added or destroyed.
2407 # and node) always means no nodes have been added or destroyed.
2376
2408
2377 # XXX this is suboptimal when qrefresh'ing: we strip the current
2409 # XXX this is suboptimal when qrefresh'ing: we strip the current
2378 # head, refresh the tag cache, then immediately add a new head.
2410 # head, refresh the tag cache, then immediately add a new head.
2379 # But I think doing it this way is necessary for the "instant
2411 # But I think doing it this way is necessary for the "instant
2380 # tag cache retrieval" case to work.
2412 # tag cache retrieval" case to work.
2381 self.invalidate()
2413 self.invalidate()
2382
2414
2383 def status(self, node1='.', node2=None, match=None,
2415 def status(self, node1='.', node2=None, match=None,
2384 ignored=False, clean=False, unknown=False,
2416 ignored=False, clean=False, unknown=False,
2385 listsubrepos=False):
2417 listsubrepos=False):
2386 '''a convenience method that calls node1.status(node2)'''
2418 '''a convenience method that calls node1.status(node2)'''
2387 return self[node1].status(node2, match, ignored, clean, unknown,
2419 return self[node1].status(node2, match, ignored, clean, unknown,
2388 listsubrepos)
2420 listsubrepos)
2389
2421
2390 def addpostdsstatus(self, ps):
2422 def addpostdsstatus(self, ps):
2391 """Add a callback to run within the wlock, at the point at which status
2423 """Add a callback to run within the wlock, at the point at which status
2392 fixups happen.
2424 fixups happen.
2393
2425
2394 On status completion, callback(wctx, status) will be called with the
2426 On status completion, callback(wctx, status) will be called with the
2395 wlock held, unless the dirstate has changed from underneath or the wlock
2427 wlock held, unless the dirstate has changed from underneath or the wlock
2396 couldn't be grabbed.
2428 couldn't be grabbed.
2397
2429
2398 Callbacks should not capture and use a cached copy of the dirstate --
2430 Callbacks should not capture and use a cached copy of the dirstate --
2399 it might change in the meanwhile. Instead, they should access the
2431 it might change in the meanwhile. Instead, they should access the
2400 dirstate via wctx.repo().dirstate.
2432 dirstate via wctx.repo().dirstate.
2401
2433
2402 This list is emptied out after each status run -- extensions should
2434 This list is emptied out after each status run -- extensions should
2403 make sure they add to this list each time dirstate.status is called.
2435 make sure they add to this list each time dirstate.status is called.
2404 Extensions should also make sure they don't call this for statuses
2436 Extensions should also make sure they don't call this for statuses
2405 that don't involve the dirstate.
2437 that don't involve the dirstate.
2406 """
2438 """
2407
2439
2408 # The list is located here for uniqueness reasons -- it is actually
2440 # The list is located here for uniqueness reasons -- it is actually
2409 # managed by the workingctx, but that isn't unique per-repo.
2441 # managed by the workingctx, but that isn't unique per-repo.
2410 self._postdsstatus.append(ps)
2442 self._postdsstatus.append(ps)
2411
2443
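# Illustrative sketch (not part of this module): registering a
# post-dirstate-status callback as described in the docstring above. The
# callback reads the dirstate through wctx.repo(), never a cached copy; the
# message is a made-up example.
def _installpostdsstatus(repo):
    def fixup(wctx, status):
        wctx.repo().ui.note(b'%d files modified\n' % len(status.modified))
    repo.addpostdsstatus(fixup)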
2412 def postdsstatus(self):
2444 def postdsstatus(self):
2413 """Used by workingctx to get the list of post-dirstate-status hooks."""
2445 """Used by workingctx to get the list of post-dirstate-status hooks."""
2414 return self._postdsstatus
2446 return self._postdsstatus
2415
2447
2416 def clearpostdsstatus(self):
2448 def clearpostdsstatus(self):
2417 """Used by workingctx to clear post-dirstate-status hooks."""
2449 """Used by workingctx to clear post-dirstate-status hooks."""
2418 del self._postdsstatus[:]
2450 del self._postdsstatus[:]
2419
2451
2420 def heads(self, start=None):
2452 def heads(self, start=None):
2421 if start is None:
2453 if start is None:
2422 cl = self.changelog
2454 cl = self.changelog
2423 headrevs = reversed(cl.headrevs())
2455 headrevs = reversed(cl.headrevs())
2424 return [cl.node(rev) for rev in headrevs]
2456 return [cl.node(rev) for rev in headrevs]
2425
2457
2426 heads = self.changelog.heads(start)
2458 heads = self.changelog.heads(start)
2427 # sort the output in rev descending order
2459 # sort the output in rev descending order
2428 return sorted(heads, key=self.changelog.rev, reverse=True)
2460 return sorted(heads, key=self.changelog.rev, reverse=True)
2429
2461
2430 def branchheads(self, branch=None, start=None, closed=False):
2462 def branchheads(self, branch=None, start=None, closed=False):
2431 '''return a (possibly filtered) list of heads for the given branch
2463 '''return a (possibly filtered) list of heads for the given branch
2432
2464
2433 Heads are returned in topological order, from newest to oldest.
2465 Heads are returned in topological order, from newest to oldest.
2434 If branch is None, use the dirstate branch.
2466 If branch is None, use the dirstate branch.
2435 If start is not None, return only heads reachable from start.
2467 If start is not None, return only heads reachable from start.
2436 If closed is True, return heads that are marked as closed as well.
2468 If closed is True, return heads that are marked as closed as well.
2437 '''
2469 '''
2438 if branch is None:
2470 if branch is None:
2439 branch = self[None].branch()
2471 branch = self[None].branch()
2440 branches = self.branchmap()
2472 branches = self.branchmap()
2441 if branch not in branches:
2473 if branch not in branches:
2442 return []
2474 return []
2443 # the cache returns heads ordered lowest to highest
2475 # the cache returns heads ordered lowest to highest
2444 bheads = list(reversed(branches.branchheads(branch, closed=closed)))
2476 bheads = list(reversed(branches.branchheads(branch, closed=closed)))
2445 if start is not None:
2477 if start is not None:
2446 # filter out the heads that cannot be reached from startrev
2478 # filter out the heads that cannot be reached from startrev
2447 fbheads = set(self.changelog.nodesbetween([start], bheads)[2])
2479 fbheads = set(self.changelog.nodesbetween([start], bheads)[2])
2448 bheads = [h for h in bheads if h in fbheads]
2480 bheads = [h for h in bheads if h in fbheads]
2449 return bheads
2481 return bheads
2450
2482
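# Illustrative sketch (not part of this module): newest-first open heads of
# a named branch, per the branchheads() docstring above. The branch name is
# an assumption for the example.
def _defaultheads(repo):
    return repo.branchheads(b'default', closed=False)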
2451 def branches(self, nodes):
2483 def branches(self, nodes):
2452 if not nodes:
2484 if not nodes:
2453 nodes = [self.changelog.tip()]
2485 nodes = [self.changelog.tip()]
2454 b = []
2486 b = []
2455 for n in nodes:
2487 for n in nodes:
2456 t = n
2488 t = n
2457 while True:
2489 while True:
2458 p = self.changelog.parents(n)
2490 p = self.changelog.parents(n)
2459 if p[1] != nullid or p[0] == nullid:
2491 if p[1] != nullid or p[0] == nullid:
2460 b.append((t, n, p[0], p[1]))
2492 b.append((t, n, p[0], p[1]))
2461 break
2493 break
2462 n = p[0]
2494 n = p[0]
2463 return b
2495 return b
2464
2496
2465 def between(self, pairs):
2497 def between(self, pairs):
2466 r = []
2498 r = []
2467
2499
2468 for top, bottom in pairs:
2500 for top, bottom in pairs:
2469 n, l, i = top, [], 0
2501 n, l, i = top, [], 0
2470 f = 1
2502 f = 1
2471
2503
2472 while n != bottom and n != nullid:
2504 while n != bottom and n != nullid:
2473 p = self.changelog.parents(n)[0]
2505 p = self.changelog.parents(n)[0]
2474 if i == f:
2506 if i == f:
2475 l.append(n)
2507 l.append(n)
2476 f = f * 2
2508 f = f * 2
2477 n = p
2509 n = p
2478 i += 1
2510 i += 1
2479
2511
2480 r.append(l)
2512 r.append(l)
2481
2513
2482 return r
2514 return r
2483
2515
2484 def checkpush(self, pushop):
2516 def checkpush(self, pushop):
2485 """Extensions can override this function if additional checks have
2517 """Extensions can override this function if additional checks have
2486 to be performed before pushing, or call it if they override push
2518 to be performed before pushing, or call it if they override push
2487 command.
2519 command.
2488 """
2520 """
2489
2521
2490 @unfilteredpropertycache
2522 @unfilteredpropertycache
2491 def prepushoutgoinghooks(self):
2523 def prepushoutgoinghooks(self):
2492 Return util.hooks consisting of hooks which are called before pushing
2524 Return util.hooks consisting of hooks which are called before pushing
2493 changesets, each receiving a pushop with repo, remote and outgoing attributes.
2525 changesets, each receiving a pushop with repo, remote and outgoing attributes.
2494 """
2526 """
2495 return util.hooks()
2527 return util.hooks()
2496
2528
2497 def pushkey(self, namespace, key, old, new):
2529 def pushkey(self, namespace, key, old, new):
2498 try:
2530 try:
2499 tr = self.currenttransaction()
2531 tr = self.currenttransaction()
2500 hookargs = {}
2532 hookargs = {}
2501 if tr is not None:
2533 if tr is not None:
2502 hookargs.update(tr.hookargs)
2534 hookargs.update(tr.hookargs)
2503 hookargs = pycompat.strkwargs(hookargs)
2535 hookargs = pycompat.strkwargs(hookargs)
2504 hookargs[r'namespace'] = namespace
2536 hookargs[r'namespace'] = namespace
2505 hookargs[r'key'] = key
2537 hookargs[r'key'] = key
2506 hookargs[r'old'] = old
2538 hookargs[r'old'] = old
2507 hookargs[r'new'] = new
2539 hookargs[r'new'] = new
2508 self.hook('prepushkey', throw=True, **hookargs)
2540 self.hook('prepushkey', throw=True, **hookargs)
2509 except error.HookAbort as exc:
2541 except error.HookAbort as exc:
2510 self.ui.write_err(_("pushkey-abort: %s\n") % exc)
2542 self.ui.write_err(_("pushkey-abort: %s\n") % exc)
2511 if exc.hint:
2543 if exc.hint:
2512 self.ui.write_err(_("(%s)\n") % exc.hint)
2544 self.ui.write_err(_("(%s)\n") % exc.hint)
2513 return False
2545 return False
2514 self.ui.debug('pushing key for "%s:%s"\n' % (namespace, key))
2546 self.ui.debug('pushing key for "%s:%s"\n' % (namespace, key))
2515 ret = pushkey.push(self, namespace, key, old, new)
2547 ret = pushkey.push(self, namespace, key, old, new)
2516 def runhook():
2548 def runhook():
2517 self.hook('pushkey', namespace=namespace, key=key, old=old, new=new,
2549 self.hook('pushkey', namespace=namespace, key=key, old=old, new=new,
2518 ret=ret)
2550 ret=ret)
2519 self._afterlock(runhook)
2551 self._afterlock(runhook)
2520 return ret
2552 return ret
2521
2553
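# Illustrative sketch (not part of this module): using listkeys()/pushkey()
# on the 'bookmarks' namespace. The bookmark name and node are placeholders
# supplied by the caller.
def _publishbookmark(repo, name, node):
    old = repo.listkeys(b'bookmarks').get(name, b'')
    return repo.pushkey(b'bookmarks', name, old, hex(node))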
2522 def listkeys(self, namespace):
2554 def listkeys(self, namespace):
2523 self.hook('prelistkeys', throw=True, namespace=namespace)
2555 self.hook('prelistkeys', throw=True, namespace=namespace)
2524 self.ui.debug('listing keys for "%s"\n' % namespace)
2556 self.ui.debug('listing keys for "%s"\n' % namespace)
2525 values = pushkey.list(self, namespace)
2557 values = pushkey.list(self, namespace)
2526 self.hook('listkeys', namespace=namespace, values=values)
2558 self.hook('listkeys', namespace=namespace, values=values)
2527 return values
2559 return values
2528
2560
2529 def debugwireargs(self, one, two, three=None, four=None, five=None):
2561 def debugwireargs(self, one, two, three=None, four=None, five=None):
2530 '''used to test argument passing over the wire'''
2562 '''used to test argument passing over the wire'''
2531 return "%s %s %s %s %s" % (one, two, pycompat.bytestr(three),
2563 return "%s %s %s %s %s" % (one, two, pycompat.bytestr(three),
2532 pycompat.bytestr(four),
2564 pycompat.bytestr(four),
2533 pycompat.bytestr(five))
2565 pycompat.bytestr(five))
2534
2566
2535 def savecommitmessage(self, text):
2567 def savecommitmessage(self, text):
2536 fp = self.vfs('last-message.txt', 'wb')
2568 fp = self.vfs('last-message.txt', 'wb')
2537 try:
2569 try:
2538 fp.write(text)
2570 fp.write(text)
2539 finally:
2571 finally:
2540 fp.close()
2572 fp.close()
2541 return self.pathto(fp.name[len(self.root) + 1:])
2573 return self.pathto(fp.name[len(self.root) + 1:])
2542
2574
2543 # used to avoid circular references so destructors work
2575 # used to avoid circular references so destructors work
2544 def aftertrans(files):
2576 def aftertrans(files):
2545 renamefiles = [tuple(t) for t in files]
2577 renamefiles = [tuple(t) for t in files]
2546 def a():
2578 def a():
2547 for vfs, src, dest in renamefiles:
2579 for vfs, src, dest in renamefiles:
2548 # if src and dest refer to the same file, vfs.rename is a no-op,
2580 # if src and dest refer to the same file, vfs.rename is a no-op,
2549 # leaving both src and dest on disk. Delete dest to make sure
2581 # leaving both src and dest on disk. Delete dest to make sure
2550 # the rename couldn't be such a no-op.
2582 # the rename couldn't be such a no-op.
2551 vfs.tryunlink(dest)
2583 vfs.tryunlink(dest)
2552 try:
2584 try:
2553 vfs.rename(src, dest)
2585 vfs.rename(src, dest)
2554 except OSError: # journal file does not yet exist
2586 except OSError: # journal file does not yet exist
2555 pass
2587 pass
2556 return a
2588 return a
2557
2589
2558 def undoname(fn):
2590 def undoname(fn):
2559 base, name = os.path.split(fn)
2591 base, name = os.path.split(fn)
2560 assert name.startswith('journal')
2592 assert name.startswith('journal')
2561 return os.path.join(base, name.replace('journal', 'undo', 1))
2593 return os.path.join(base, name.replace('journal', 'undo', 1))
2562
2594
2563 def instance(ui, path, create, intents=None, createopts=None):
2595 def instance(ui, path, create, intents=None, createopts=None):
2564 localpath = util.urllocalpath(path)
2596 localpath = util.urllocalpath(path)
2565 if create:
2597 if create:
2566 createrepository(ui, localpath, createopts=createopts)
2598 createrepository(ui, localpath, createopts=createopts)
2567
2599
2568 return makelocalrepository(ui, localpath, intents=intents)
2600 return makelocalrepository(ui, localpath, intents=intents)
2569
2601
2570 def islocal(path):
2602 def islocal(path):
2571 return True
2603 return True
2572
2604
2573 def newreporequirements(ui, createopts=None):
2605 def newreporequirements(ui, createopts=None):
2574 """Determine the set of requirements for a new local repository.
2606 """Determine the set of requirements for a new local repository.
2575
2607
2576 Extensions can wrap this function to specify custom requirements for
2608 Extensions can wrap this function to specify custom requirements for
2577 new repositories.
2609 new repositories.
2578 """
2610 """
2579 createopts = createopts or {}
2611 createopts = createopts or {}
2580
2612
2581 requirements = {'revlogv1'}
2613 requirements = {'revlogv1'}
2582 if ui.configbool('format', 'usestore'):
2614 if ui.configbool('format', 'usestore'):
2583 requirements.add('store')
2615 requirements.add('store')
2584 if ui.configbool('format', 'usefncache'):
2616 if ui.configbool('format', 'usefncache'):
2585 requirements.add('fncache')
2617 requirements.add('fncache')
2586 if ui.configbool('format', 'dotencode'):
2618 if ui.configbool('format', 'dotencode'):
2587 requirements.add('dotencode')
2619 requirements.add('dotencode')
2588
2620
2589 compengine = ui.config('experimental', 'format.compression')
2621 compengine = ui.config('experimental', 'format.compression')
2590 if compengine not in util.compengines:
2622 if compengine not in util.compengines:
2591 raise error.Abort(_('compression engine %s defined by '
2623 raise error.Abort(_('compression engine %s defined by '
2592 'experimental.format.compression not available') %
2624 'experimental.format.compression not available') %
2593 compengine,
2625 compengine,
2594 hint=_('run "hg debuginstall" to list available '
2626 hint=_('run "hg debuginstall" to list available '
2595 'compression engines'))
2627 'compression engines'))
2596
2628
2597 # zlib is the historical default and doesn't need an explicit requirement.
2629 # zlib is the historical default and doesn't need an explicit requirement.
2598 if compengine != 'zlib':
2630 if compengine != 'zlib':
2599 requirements.add('exp-compression-%s' % compengine)
2631 requirements.add('exp-compression-%s' % compengine)
2600
2632
2601 if scmutil.gdinitconfig(ui):
2633 if scmutil.gdinitconfig(ui):
2602 requirements.add('generaldelta')
2634 requirements.add('generaldelta')
2603 if ui.configbool('experimental', 'treemanifest'):
2635 if ui.configbool('experimental', 'treemanifest'):
2604 requirements.add('treemanifest')
2636 requirements.add('treemanifest')
2605 # experimental config: format.sparse-revlog
2637 # experimental config: format.sparse-revlog
2606 if ui.configbool('format', 'sparse-revlog'):
2638 if ui.configbool('format', 'sparse-revlog'):
2607 requirements.add(SPARSEREVLOG_REQUIREMENT)
2639 requirements.add(SPARSEREVLOG_REQUIREMENT)
2608
2640
2609 revlogv2 = ui.config('experimental', 'revlogv2')
2641 revlogv2 = ui.config('experimental', 'revlogv2')
2610 if revlogv2 == 'enable-unstable-format-and-corrupt-my-data':
2642 if revlogv2 == 'enable-unstable-format-and-corrupt-my-data':
2611 requirements.remove('revlogv1')
2643 requirements.remove('revlogv1')
2612 # generaldelta is implied by revlogv2.
2644 # generaldelta is implied by revlogv2.
2613 requirements.discard('generaldelta')
2645 requirements.discard('generaldelta')
2614 requirements.add(REVLOGV2_REQUIREMENT)
2646 requirements.add(REVLOGV2_REQUIREMENT)
2615 # experimental config: format.internal-phase
2647 # experimental config: format.internal-phase
2616 if ui.configbool('format', 'internal-phase'):
2648 if ui.configbool('format', 'internal-phase'):
2617 requirements.add('internal-phase')
2649 requirements.add('internal-phase')
2618
2650
2619 if createopts.get('narrowfiles'):
2651 if createopts.get('narrowfiles'):
2620 requirements.add(repository.NARROW_REQUIREMENT)
2652 requirements.add(repository.NARROW_REQUIREMENT)
2621
2653
2622 return requirements
2654 return requirements
2623
2655
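# Illustrative sketch (not part of this module): the wrapping pattern the
# docstring above refers to, as it would appear in a third-party extension.
# The requirement name is made up; assumes
# `from mercurial import extensions, localrepo` in that extension.
def extsetup(ui):
    def wrapper(orig, ui, createopts=None):
        requirements = orig(ui, createopts=createopts)
        requirements.add('exp-example-requirement')
        return requirements
    extensions.wrapfunction(localrepo, 'newreporequirements', wrapper)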
2624 def filterknowncreateopts(ui, createopts):
2656 def filterknowncreateopts(ui, createopts):
2625 """Filters a dict of repo creation options against options that are known.
2657 """Filters a dict of repo creation options against options that are known.
2626
2658
2627 Receives a dict of repo creation options and returns a dict of those
2659 Receives a dict of repo creation options and returns a dict of those
2628 options that we don't know how to handle.
2660 options that we don't know how to handle.
2629
2661
2630 This function is called as part of repository creation. If the
2662 This function is called as part of repository creation. If the
2631 returned dict contains any items, repository creation will not
2663 returned dict contains any items, repository creation will not
2632 be allowed, as it means there was a request to create a repository
2664 be allowed, as it means there was a request to create a repository
2633 with options not recognized by loaded code.
2665 with options not recognized by loaded code.
2634
2666
2635 Extensions can wrap this function to filter out creation options
2667 Extensions can wrap this function to filter out creation options
2636 they know how to handle.
2668 they know how to handle.
2637 """
2669 """
2638 known = {'narrowfiles'}
2670 known = {'narrowfiles'}
2639
2671
2640 return {k: v for k, v in createopts.items() if k not in known}
2672 return {k: v for k, v in createopts.items() if k not in known}
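As the docstring above notes, an extension can wrap filterknowncreateopts() to claim additional creation options. A minimal sketch of such a wrapper follows; it is illustrative only, and the option name 'myfeature' is a made-up example rather than anything defined by this change:

    # Hypothetical extension code (sketch only, not part of this change).
    from mercurial import extensions, localrepo

    def _filtercreateopts(orig, ui, createopts):
        unknown = orig(ui, createopts)
        # Claim the made-up 'myfeature' option so repo creation proceeds.
        unknown.pop('myfeature', None)
        return unknown

    def uisetup(ui):
        extensions.wrapfunction(localrepo, 'filterknowncreateopts',
                                _filtercreateopts)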
2641
2673
2642 def createrepository(ui, path, createopts=None):
2674 def createrepository(ui, path, createopts=None):
2643 """Create a new repository in a vfs.
2675 """Create a new repository in a vfs.
2644
2676
2645 ``path`` path to the new repo's working directory.
2677 ``path`` path to the new repo's working directory.
2646 ``createopts`` options for the new repository.
2678 ``createopts`` options for the new repository.
2647 """
2679 """
2648 createopts = createopts or {}
2680 createopts = createopts or {}
2649
2681
2650 unknownopts = filterknowncreateopts(ui, createopts)
2682 unknownopts = filterknowncreateopts(ui, createopts)
2651
2683
2652 if not isinstance(unknownopts, dict):
2684 if not isinstance(unknownopts, dict):
2653 raise error.ProgrammingError('filterknowncreateopts() did not return '
2685 raise error.ProgrammingError('filterknowncreateopts() did not return '
2654 'a dict')
2686 'a dict')
2655
2687
2656 if unknownopts:
2688 if unknownopts:
2657 raise error.Abort(_('unable to create repository because of unknown '
2689 raise error.Abort(_('unable to create repository because of unknown '
2658 'creation option: %s') %
2690 'creation option: %s') %
2659 ', '.join(sorted(unknownopts)),
2691 ', '.join(sorted(unknownopts)),
2660 hint=_('is a required extension not loaded?'))
2692 hint=_('is a required extension not loaded?'))
2661
2693
2662 requirements = newreporequirements(ui, createopts=createopts)
2694 requirements = newreporequirements(ui, createopts=createopts)
2663
2695
2664 wdirvfs = vfsmod.vfs(path, expandpath=True, realpath=True)
2696 wdirvfs = vfsmod.vfs(path, expandpath=True, realpath=True)
2665 if not wdirvfs.exists():
2697 if not wdirvfs.exists():
2666 wdirvfs.makedirs()
2698 wdirvfs.makedirs()
2667
2699
2668 hgvfs = vfsmod.vfs(wdirvfs.join(b'.hg'))
2700 hgvfs = vfsmod.vfs(wdirvfs.join(b'.hg'))
2669 if hgvfs.exists():
2701 if hgvfs.exists():
2670 raise error.RepoError(_('repository %s already exists') % path)
2702 raise error.RepoError(_('repository %s already exists') % path)
2671
2703
2672 hgvfs.makedir(notindexed=True)
2704 hgvfs.makedir(notindexed=True)
2673
2705
2674 if b'store' in requirements:
2706 if b'store' in requirements:
2675 hgvfs.mkdir(b'store')
2707 hgvfs.mkdir(b'store')
2676
2708
2677 # We create an invalid changelog outside the store so very old
2709 # We create an invalid changelog outside the store so very old
2678 # Mercurial versions (which didn't know about the requirements
2710 # Mercurial versions (which didn't know about the requirements
2679 # file) encounter an error on reading the changelog. This
2711 # file) encounter an error on reading the changelog. This
2680 # effectively locks out old clients and prevents them from
2712 # effectively locks out old clients and prevents them from
2681 # mucking with a repo in an unknown format.
2713 # mucking with a repo in an unknown format.
2682 #
2714 #
2683 # The revlog header has version 2, which won't be recognized by
2715 # The revlog header has version 2, which won't be recognized by
2684 # such old clients.
2716 # such old clients.
2685 hgvfs.append(b'00changelog.i',
2717 hgvfs.append(b'00changelog.i',
2686 b'\0\0\0\2 dummy changelog to prevent using the old repo '
2718 b'\0\0\0\2 dummy changelog to prevent using the old repo '
2687 b'layout')
2719 b'layout')
2688
2720
2689 scmutil.writerequires(hgvfs, requirements)
2721 scmutil.writerequires(hgvfs, requirements)
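For reference, a caller-side sketch of the function above; the path is an illustrative value, a bytes path is assumed, and the 'narrowfiles' flag is shown only because it is the one known creation option:

    # Hypothetical caller (sketch only).
    from mercurial import localrepo, ui as uimod

    ui = uimod.ui.load()
    localrepo.createrepository(ui, b'/tmp/newrepo',
                               createopts={'narrowfiles': False})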
2690
2722
2691 def poisonrepository(repo):
2723 def poisonrepository(repo):
2692 """Poison a repository instance so it can no longer be used."""
2724 """Poison a repository instance so it can no longer be used."""
2693 # Perform any cleanup on the instance.
2725 # Perform any cleanup on the instance.
2694 repo.close()
2726 repo.close()
2695
2727
2696 # Our strategy is to replace the type of the object with one that
2728 # Our strategy is to replace the type of the object with one that
2697 # has all attribute lookups result in error.
2729 # has all attribute lookups result in error.
2698 #
2730 #
2699 # But we have to allow the close() method because some constructors
2731 # But we have to allow the close() method because some constructors
2700 # of repos call close() on repo references.
2732 # of repos call close() on repo references.
2701 class poisonedrepository(object):
2733 class poisonedrepository(object):
2702 def __getattribute__(self, item):
2734 def __getattribute__(self, item):
2703 if item == r'close':
2735 if item == r'close':
2704 return object.__getattribute__(self, item)
2736 return object.__getattribute__(self, item)
2705
2737
2706 raise error.ProgrammingError('repo instances should not be used '
2738 raise error.ProgrammingError('repo instances should not be used '
2707 'after unshare')
2739 'after unshare')
2708
2740
2709 def close(self):
2741 def close(self):
2710 pass
2742 pass
2711
2743
2712 # We may have a repoview, which intercepts __setattr__. So be sure
2744 # We may have a repoview, which intercepts __setattr__. So be sure
2713 # we operate at the lowest level possible.
2745 # we operate at the lowest level possible.
2714 object.__setattr__(repo, r'__class__', poisonedrepository)
2746 object.__setattr__(repo, r'__class__', poisonedrepository)
@@ -1,1580 +1,1574
1 # repository.py - Interfaces and base classes for repositories and peers.
1 # repository.py - Interfaces and base classes for repositories and peers.
2 #
2 #
3 # Copyright 2017 Gregory Szorc <gregory.szorc@gmail.com>
3 # Copyright 2017 Gregory Szorc <gregory.szorc@gmail.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from __future__ import absolute_import
8 from __future__ import absolute_import
9
9
10 from .i18n import _
10 from .i18n import _
11 from . import (
11 from . import (
12 error,
12 error,
13 )
13 )
14 from .utils import (
14 from .utils import (
15 interfaceutil,
15 interfaceutil,
16 )
16 )
17
17
18 # When narrowing is finalized and no longer subject to format changes,
18 # When narrowing is finalized and no longer subject to format changes,
19 # we should move this to just "narrow" or similar.
19 # we should move this to just "narrow" or similar.
20 NARROW_REQUIREMENT = 'narrowhg-experimental'
20 NARROW_REQUIREMENT = 'narrowhg-experimental'
21
21
22 class ipeerconnection(interfaceutil.Interface):
22 class ipeerconnection(interfaceutil.Interface):
23 """Represents a "connection" to a repository.
23 """Represents a "connection" to a repository.
24
24
25 This is the base interface for representing a connection to a repository.
25 This is the base interface for representing a connection to a repository.
26 It holds basic properties and methods applicable to all peer types.
26 It holds basic properties and methods applicable to all peer types.
27
27
28 This is not a complete interface definition and should not be used
28 This is not a complete interface definition and should not be used
29 outside of this module.
29 outside of this module.
30 """
30 """
31 ui = interfaceutil.Attribute("""ui.ui instance""")
31 ui = interfaceutil.Attribute("""ui.ui instance""")
32
32
33 def url():
33 def url():
34 """Returns a URL string representing this peer.
34 """Returns a URL string representing this peer.
35
35
36 Currently, implementations expose the raw URL used to construct the
36 Currently, implementations expose the raw URL used to construct the
37 instance. It may contain credentials as part of the URL. The
37 instance. It may contain credentials as part of the URL. The
38 expectations of the value aren't well-defined and this could lead to
38 expectations of the value aren't well-defined and this could lead to
39 data leakage.
39 data leakage.
40
40
41 TODO audit/clean consumers and more clearly define the contents of this
41 TODO audit/clean consumers and more clearly define the contents of this
42 value.
42 value.
43 """
43 """
44
44
45 def local():
45 def local():
46 """Returns a local repository instance.
46 """Returns a local repository instance.
47
47
48 If the peer represents a local repository, returns an object that
48 If the peer represents a local repository, returns an object that
49 can be used to interface with it. Otherwise returns ``None``.
49 can be used to interface with it. Otherwise returns ``None``.
50 """
50 """
51
51
52 def peer():
52 def peer():
53 """Returns an object conforming to this interface.
53 """Returns an object conforming to this interface.
54
54
55 Most implementations will ``return self``.
55 Most implementations will ``return self``.
56 """
56 """
57
57
58 def canpush():
58 def canpush():
59 """Returns a boolean indicating if this peer can be pushed to."""
59 """Returns a boolean indicating if this peer can be pushed to."""
60
60
61 def close():
61 def close():
62 """Close the connection to this peer.
62 """Close the connection to this peer.
63
63
64 This is called when the peer will no longer be used. Resources
64 This is called when the peer will no longer be used. Resources
65 associated with the peer should be cleaned up.
65 associated with the peer should be cleaned up.
66 """
66 """
67
67
68 class ipeercapabilities(interfaceutil.Interface):
68 class ipeercapabilities(interfaceutil.Interface):
69 """Peer sub-interface related to capabilities."""
69 """Peer sub-interface related to capabilities."""
70
70
71 def capable(name):
71 def capable(name):
72 """Determine support for a named capability.
72 """Determine support for a named capability.
73
73
74 Returns ``False`` if capability not supported.
74 Returns ``False`` if capability not supported.
75
75
76 Returns ``True`` if boolean capability is supported. Returns a string
76 Returns ``True`` if boolean capability is supported. Returns a string
77 if capability support is non-boolean.
77 if capability support is non-boolean.
78
78
79 Capability strings may or may not map to wire protocol capabilities.
79 Capability strings may or may not map to wire protocol capabilities.
80 """
80 """
81
81
82 def requirecap(name, purpose):
82 def requirecap(name, purpose):
83 """Require a capability to be present.
83 """Require a capability to be present.
84
84
85 Raises a ``CapabilityError`` if the capability isn't present.
85 Raises a ``CapabilityError`` if the capability isn't present.
86 """
86 """
87
87
88 class ipeercommands(interfaceutil.Interface):
88 class ipeercommands(interfaceutil.Interface):
89 """Client-side interface for communicating over the wire protocol.
89 """Client-side interface for communicating over the wire protocol.
90
90
91 This interface is used as a gateway to the Mercurial wire protocol.
91 This interface is used as a gateway to the Mercurial wire protocol.
92 Methods commonly call wire protocol commands of the same name.
92 Methods commonly call wire protocol commands of the same name.
93 """
93 """
94
94
95 def branchmap():
95 def branchmap():
96 """Obtain heads in named branches.
96 """Obtain heads in named branches.
97
97
98 Returns a dict mapping branch name to an iterable of nodes that are
98 Returns a dict mapping branch name to an iterable of nodes that are
99 heads on that branch.
99 heads on that branch.
100 """
100 """
101
101
102 def capabilities():
102 def capabilities():
103 """Obtain capabilities of the peer.
103 """Obtain capabilities of the peer.
104
104
105 Returns a set of string capabilities.
105 Returns a set of string capabilities.
106 """
106 """
107
107
108 def clonebundles():
108 def clonebundles():
109 """Obtains the clone bundles manifest for the repo.
109 """Obtains the clone bundles manifest for the repo.
110
110
111 Returns the manifest as unparsed bytes.
111 Returns the manifest as unparsed bytes.
112 """
112 """
113
113
114 def debugwireargs(one, two, three=None, four=None, five=None):
114 def debugwireargs(one, two, three=None, four=None, five=None):
115 """Used to facilitate debugging of arguments passed over the wire."""
115 """Used to facilitate debugging of arguments passed over the wire."""
116
116
117 def getbundle(source, **kwargs):
117 def getbundle(source, **kwargs):
118 """Obtain remote repository data as a bundle.
118 """Obtain remote repository data as a bundle.
119
119
120 This command is how the bulk of repository data is transferred from
120 This command is how the bulk of repository data is transferred from
121 the peer to the local repository.
121 the peer to the local repository.
122
122
123 Returns a generator of bundle data.
123 Returns a generator of bundle data.
124 """
124 """
125
125
126 def heads():
126 def heads():
127 """Determine all known head revisions in the peer.
127 """Determine all known head revisions in the peer.
128
128
129 Returns an iterable of binary nodes.
129 Returns an iterable of binary nodes.
130 """
130 """
131
131
132 def known(nodes):
132 def known(nodes):
133 """Determine whether multiple nodes are known.
133 """Determine whether multiple nodes are known.
134
134
135 Accepts an iterable of nodes whose presence to check for.
135 Accepts an iterable of nodes whose presence to check for.
136
136
137 Returns an iterable of booleans indicating whether the corresponding node
137 Returns an iterable of booleans indicating whether the corresponding node
138 at that index is known to the peer.
138 at that index is known to the peer.
139 """
139 """
140
140
141 def listkeys(namespace):
141 def listkeys(namespace):
142 """Obtain all keys in a pushkey namespace.
142 """Obtain all keys in a pushkey namespace.
143
143
144 Returns an iterable of key names.
144 Returns an iterable of key names.
145 """
145 """
146
146
147 def lookup(key):
147 def lookup(key):
148 """Resolve a value to a known revision.
148 """Resolve a value to a known revision.
149
149
150 Returns a binary node of the resolved revision on success.
150 Returns a binary node of the resolved revision on success.
151 """
151 """
152
152
153 def pushkey(namespace, key, old, new):
153 def pushkey(namespace, key, old, new):
154 """Set a value using the ``pushkey`` protocol.
154 """Set a value using the ``pushkey`` protocol.
155
155
156 Arguments correspond to the pushkey namespace and key to operate on and
156 Arguments correspond to the pushkey namespace and key to operate on and
157 the old and new values for that key.
157 the old and new values for that key.
158
158
159 Returns a string with the peer result. The value inside varies by the
159 Returns a string with the peer result. The value inside varies by the
160 namespace.
160 namespace.
161 """
161 """
162
162
163 def stream_out():
163 def stream_out():
164 """Obtain streaming clone data.
164 """Obtain streaming clone data.
165
165
166 Successful result should be a generator of data chunks.
166 Successful result should be a generator of data chunks.
167 """
167 """
168
168
169 def unbundle(bundle, heads, url):
169 def unbundle(bundle, heads, url):
170 """Transfer repository data to the peer.
170 """Transfer repository data to the peer.
171
171
172 This is how the bulk of data during a push is transferred.
172 This is how the bulk of data during a push is transferred.
173
173
174 Returns the integer number of heads added to the peer.
174 Returns the integer number of heads added to the peer.
175 """
175 """
176
176
177 class ipeerlegacycommands(interfaceutil.Interface):
177 class ipeerlegacycommands(interfaceutil.Interface):
178 """Interface for implementing support for legacy wire protocol commands.
178 """Interface for implementing support for legacy wire protocol commands.
179
179
180 Wire protocol commands transition to legacy status when they are no longer
180 Wire protocol commands transition to legacy status when they are no longer
181 used by modern clients. To facilitate identifying which commands are
181 used by modern clients. To facilitate identifying which commands are
182 legacy, the interfaces are split.
182 legacy, the interfaces are split.
183 """
183 """
184
184
185 def between(pairs):
185 def between(pairs):
186 """Obtain nodes between pairs of nodes.
186 """Obtain nodes between pairs of nodes.
187
187
188 ``pairs`` is an iterable of node pairs.
188 ``pairs`` is an iterable of node pairs.
189
189
190 Returns an iterable of iterables of nodes corresponding to each
190 Returns an iterable of iterables of nodes corresponding to each
191 requested pair.
191 requested pair.
192 """
192 """
193
193
194 def branches(nodes):
194 def branches(nodes):
195 """Obtain ancestor changesets of specific nodes back to a branch point.
195 """Obtain ancestor changesets of specific nodes back to a branch point.
196
196
197 For each requested node, the peer finds the first ancestor node that is
197 For each requested node, the peer finds the first ancestor node that is
198 a DAG root or is a merge.
198 a DAG root or is a merge.
199
199
200 Returns an iterable of iterables with the resolved values for each node.
200 Returns an iterable of iterables with the resolved values for each node.
201 """
201 """
202
202
203 def changegroup(nodes, source):
203 def changegroup(nodes, source):
204 """Obtain a changegroup with data for descendants of specified nodes."""
204 """Obtain a changegroup with data for descendants of specified nodes."""
205
205
206 def changegroupsubset(bases, heads, source):
206 def changegroupsubset(bases, heads, source):
207 pass
207 pass
208
208
209 class ipeercommandexecutor(interfaceutil.Interface):
209 class ipeercommandexecutor(interfaceutil.Interface):
210 """Represents a mechanism to execute remote commands.
210 """Represents a mechanism to execute remote commands.
211
211
212 This is the primary interface for requesting that wire protocol commands
212 This is the primary interface for requesting that wire protocol commands
213 be executed. Instances of this interface are active in a context manager
213 be executed. Instances of this interface are active in a context manager
214 and have a well-defined lifetime. When the context manager exits, all
214 and have a well-defined lifetime. When the context manager exits, all
215 outstanding requests are waited on.
215 outstanding requests are waited on.
216 """
216 """
217
217
218 def callcommand(name, args):
218 def callcommand(name, args):
219 """Request that a named command be executed.
219 """Request that a named command be executed.
220
220
221 Receives the command name and a dictionary of command arguments.
221 Receives the command name and a dictionary of command arguments.
222
222
223 Returns a ``concurrent.futures.Future`` that will resolve to the
223 Returns a ``concurrent.futures.Future`` that will resolve to the
224 result of that command request. That exact value is left up to
224 result of that command request. That exact value is left up to
225 the implementation and possibly varies by command.
225 the implementation and possibly varies by command.
226
226
227 Not all commands can coexist with other commands in an executor
227 Not all commands can coexist with other commands in an executor
228 instance: it depends on the underlying wire protocol transport being
228 instance: it depends on the underlying wire protocol transport being
229 used and the command itself.
229 used and the command itself.
230
230
231 Implementations MAY call ``sendcommands()`` automatically if the
231 Implementations MAY call ``sendcommands()`` automatically if the
232 requested command can not coexist with other commands in this executor.
232 requested command can not coexist with other commands in this executor.
233
233
234 Implementations MAY call ``sendcommands()`` automatically when the
234 Implementations MAY call ``sendcommands()`` automatically when the
235 future's ``result()`` is called. So, consumers using multiple
235 future's ``result()`` is called. So, consumers using multiple
236 commands with an executor MUST ensure that ``result()`` is not called
236 commands with an executor MUST ensure that ``result()`` is not called
237 until all command requests have been issued.
237 until all command requests have been issued.
238 """
238 """
239
239
240 def sendcommands():
240 def sendcommands():
241 """Trigger submission of queued command requests.
241 """Trigger submission of queued command requests.
242
242
243 Not all transports submit commands as soon as they are requested to
243 Not all transports submit commands as soon as they are requested to
244 run. When called, this method forces queued command requests to be
244 run. When called, this method forces queued command requests to be
245 issued. It will no-op if all commands have already been sent.
245 issued. It will no-op if all commands have already been sent.
246
246
247 When called, no more new commands may be issued with this executor.
247 When called, no more new commands may be issued with this executor.
248 """
248 """
249
249
250 def close():
250 def close():
251 """Signal that this command request is finished.
251 """Signal that this command request is finished.
252
252
253 When called, no more new commands may be issued. All outstanding
253 When called, no more new commands may be issued. All outstanding
254 commands that have previously been issued are waited on before
254 commands that have previously been issued are waited on before
255 returning. This not only includes waiting for the futures to resolve,
255 returning. This not only includes waiting for the futures to resolve,
256 but also waiting for all response data to arrive. In other words,
256 but also waiting for all response data to arrive. In other words,
257 calling this waits for all on-wire state for issued command requests
257 calling this waits for all on-wire state for issued command requests
258 to finish.
258 to finish.
259
259
260 When used as a context manager, this method is called when exiting the
260 When used as a context manager, this method is called when exiting the
261 context manager.
261 context manager.
262
262
263 This method may call ``sendcommands()`` if there are buffered commands.
263 This method may call ``sendcommands()`` if there are buffered commands.
264 """
264 """
265
265
266 class ipeerrequests(interfaceutil.Interface):
266 class ipeerrequests(interfaceutil.Interface):
267 """Interface for executing commands on a peer."""
267 """Interface for executing commands on a peer."""
268
268
269 def commandexecutor():
269 def commandexecutor():
270 """A context manager that resolves to an ipeercommandexecutor.
270 """A context manager that resolves to an ipeercommandexecutor.
271
271
272 The object this resolves to can be used to issue command requests
272 The object this resolves to can be used to issue command requests
273 to the peer.
273 to the peer.
274
274
275 Callers should call its ``callcommand`` method to issue command
275 Callers should call its ``callcommand`` method to issue command
276 requests.
276 requests.
277
277
278 A new executor should be obtained for each distinct set of commands
278 A new executor should be obtained for each distinct set of commands
279 (possibly just a single command) that the consumer wants to execute
279 (possibly just a single command) that the consumer wants to execute
280 as part of a single operation or round trip. This is because some
280 as part of a single operation or round trip. This is because some
281 peers are half-duplex and/or don't support persistent connections.
281 peers are half-duplex and/or don't support persistent connections.
282 e.g. in the case of HTTP peers, commands sent to an executor represent
282 e.g. in the case of HTTP peers, commands sent to an executor represent
283 a single HTTP request. While some peers may support multiple command
283 a single HTTP request. While some peers may support multiple command
284 sends over the wire per executor, consumers need to code to the least
284 sends over the wire per executor, consumers need to code to the least
285 capable peer. So it should be assumed that command executors buffer
285 capable peer. So it should be assumed that command executors buffer
286 called commands until they are told to send them and that each
286 called commands until they are told to send them and that each
287 command executor could result in a new connection or wire-level request
287 command executor could result in a new connection or wire-level request
288 being issued.
288 being issued.
289 """
289 """
290
290
291 class ipeerbase(ipeerconnection, ipeercapabilities, ipeerrequests):
291 class ipeerbase(ipeerconnection, ipeercapabilities, ipeerrequests):
292 """Unified interface for peer repositories.
292 """Unified interface for peer repositories.
293
293
294 All peer instances must conform to this interface.
294 All peer instances must conform to this interface.
295 """
295 """
296
296
297 @interfaceutil.implementer(ipeerbase)
297 @interfaceutil.implementer(ipeerbase)
298 class peer(object):
298 class peer(object):
299 """Base class for peer repositories."""
299 """Base class for peer repositories."""
300
300
301 def capable(self, name):
301 def capable(self, name):
302 caps = self.capabilities()
302 caps = self.capabilities()
303 if name in caps:
303 if name in caps:
304 return True
304 return True
305
305
306 name = '%s=' % name
306 name = '%s=' % name
307 for cap in caps:
307 for cap in caps:
308 if cap.startswith(name):
308 if cap.startswith(name):
309 return cap[len(name):]
309 return cap[len(name):]
310
310
311 return False
311 return False
312
312
313 def requirecap(self, name, purpose):
313 def requirecap(self, name, purpose):
314 if self.capable(name):
314 if self.capable(name):
315 return
315 return
316
316
317 raise error.CapabilityError(
317 raise error.CapabilityError(
318 _('cannot %s; remote repository does not support the %r '
318 _('cannot %s; remote repository does not support the %r '
319 'capability') % (purpose, name))
319 'capability') % (purpose, name))
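A brief sketch of how these two methods are consumed; 'getbundle' and 'bundle2' are common capability names used here purely as examples, and the purpose string is a placeholder:

    # Illustrative use of capable()/requirecap() (sketch only).
    def checkpeer(peer):
        if not peer.capable(b'getbundle'):
            return False                         # boolean capability absent
        bundle2blob = peer.capable(b'bundle2')   # non-boolean: value after '=' or False
        peer.requirecap(b'getbundle', b'fetch remote data')  # raises CapabilityError if missing
        return bundle2blob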
320
320
321 class irevisiondelta(interfaceutil.Interface):
321 class irevisiondelta(interfaceutil.Interface):
322 """Represents a delta between one revision and another.
322 """Represents a delta between one revision and another.
323
323
324 Instances convey enough information to allow a revision to be exchanged
324 Instances convey enough information to allow a revision to be exchanged
325 with another repository.
325 with another repository.
326
326
327 Instances represent the fulltext revision data or a delta against
327 Instances represent the fulltext revision data or a delta against
328 another revision. Therefore the ``revision`` and ``delta`` attributes
328 another revision. Therefore the ``revision`` and ``delta`` attributes
329 are mutually exclusive.
329 are mutually exclusive.
330
330
331 Typically used for changegroup generation.
331 Typically used for changegroup generation.
332 """
332 """
333
333
334 node = interfaceutil.Attribute(
334 node = interfaceutil.Attribute(
335 """20 byte node of this revision.""")
335 """20 byte node of this revision.""")
336
336
337 p1node = interfaceutil.Attribute(
337 p1node = interfaceutil.Attribute(
338 """20 byte node of 1st parent of this revision.""")
338 """20 byte node of 1st parent of this revision.""")
339
339
340 p2node = interfaceutil.Attribute(
340 p2node = interfaceutil.Attribute(
341 """20 byte node of 2nd parent of this revision.""")
341 """20 byte node of 2nd parent of this revision.""")
342
342
343 linknode = interfaceutil.Attribute(
343 linknode = interfaceutil.Attribute(
344 """20 byte node of the changelog revision this node is linked to.""")
344 """20 byte node of the changelog revision this node is linked to.""")
345
345
346 flags = interfaceutil.Attribute(
346 flags = interfaceutil.Attribute(
347 """2 bytes of integer flags that apply to this revision.""")
347 """2 bytes of integer flags that apply to this revision.""")
348
348
349 basenode = interfaceutil.Attribute(
349 basenode = interfaceutil.Attribute(
350 """20 byte node of the revision this data is a delta against.
350 """20 byte node of the revision this data is a delta against.
351
351
352 ``nullid`` indicates that the revision is a full revision and not
352 ``nullid`` indicates that the revision is a full revision and not
353 a delta.
353 a delta.
354 """)
354 """)
355
355
356 baserevisionsize = interfaceutil.Attribute(
356 baserevisionsize = interfaceutil.Attribute(
357 """Size of base revision this delta is against.
357 """Size of base revision this delta is against.
358
358
359 May be ``None`` if ``basenode`` is ``nullid``.
359 May be ``None`` if ``basenode`` is ``nullid``.
360 """)
360 """)
361
361
362 revision = interfaceutil.Attribute(
362 revision = interfaceutil.Attribute(
363 """Raw fulltext of revision data for this node.""")
363 """Raw fulltext of revision data for this node.""")
364
364
365 delta = interfaceutil.Attribute(
365 delta = interfaceutil.Attribute(
366 """Delta between ``basenode`` and ``node``.
366 """Delta between ``basenode`` and ``node``.
367
367
368 Stored in the bdiff delta format.
368 Stored in the bdiff delta format.
369 """)
369 """)
370
370
371 class irevisiondeltarequest(interfaceutil.Interface):
371 class irevisiondeltarequest(interfaceutil.Interface):
372 """Represents a request to generate an ``irevisiondelta``."""
372 """Represents a request to generate an ``irevisiondelta``."""
373
373
374 node = interfaceutil.Attribute(
374 node = interfaceutil.Attribute(
375 """20 byte node of revision being requested.""")
375 """20 byte node of revision being requested.""")
376
376
377 p1node = interfaceutil.Attribute(
377 p1node = interfaceutil.Attribute(
378 """20 byte node of 1st parent of revision.""")
378 """20 byte node of 1st parent of revision.""")
379
379
380 p2node = interfaceutil.Attribute(
380 p2node = interfaceutil.Attribute(
381 """20 byte node of 2nd parent of revision.""")
381 """20 byte node of 2nd parent of revision.""")
382
382
383 linknode = interfaceutil.Attribute(
383 linknode = interfaceutil.Attribute(
384 """20 byte node to store in ``linknode`` attribute.""")
384 """20 byte node to store in ``linknode`` attribute.""")
385
385
386 basenode = interfaceutil.Attribute(
386 basenode = interfaceutil.Attribute(
387 """Base revision that delta should be generated against.
387 """Base revision that delta should be generated against.
388
388
389 If ``nullid``, the derived ``irevisiondelta`` should have its
389 If ``nullid``, the derived ``irevisiondelta`` should have its
390 ``revision`` field populated and no delta should be generated.
390 ``revision`` field populated and no delta should be generated.
391
391
392 If ``None``, the delta may be generated against any revision that
392 If ``None``, the delta may be generated against any revision that
393 is an ancestor of this revision. Or a full revision may be used.
393 is an ancestor of this revision. Or a full revision may be used.
394
394
395 If any other value, the delta should be produced against that
395 If any other value, the delta should be produced against that
396 revision.
396 revision.
397 """)
397 """)
398
398
399 ellipsis = interfaceutil.Attribute(
399 ellipsis = interfaceutil.Attribute(
400 """Boolean on whether the ellipsis flag should be set.""")
400 """Boolean on whether the ellipsis flag should be set.""")
401
401
402 class ifilerevisionssequence(interfaceutil.Interface):
402 class ifilerevisionssequence(interfaceutil.Interface):
403 """Contains index data for all revisions of a file.
403 """Contains index data for all revisions of a file.
404
404
405 Types implementing this behave like lists of tuples. The index
405 Types implementing this behave like lists of tuples. The index
406 in the list corresponds to the revision number. The values contain
406 in the list corresponds to the revision number. The values contain
407 index metadata.
407 index metadata.
408
408
409 The *null* revision (revision number -1) is always the last item
409 The *null* revision (revision number -1) is always the last item
410 in the index.
410 in the index.
411 """
411 """
412
412
413 def __len__():
413 def __len__():
414 """The total number of revisions."""
414 """The total number of revisions."""
415
415
416 def __getitem__(rev):
416 def __getitem__(rev):
417 """Returns the object having a specific revision number.
417 """Returns the object having a specific revision number.
418
418
419 Returns an 8-tuple with the following fields:
419 Returns an 8-tuple with the following fields:
420
420
421 offset+flags
421 offset+flags
422 Contains the offset and flags for the revision. 64-bit unsigned
422 Contains the offset and flags for the revision. 64-bit unsigned
423 integer where the first 6 bytes are the offset and the next 2 bytes
423 integer where the first 6 bytes are the offset and the next 2 bytes
424 are flags. The offset can be 0 if it is not used by the store.
424 are flags. The offset can be 0 if it is not used by the store.
425 compressed size
425 compressed size
426 Size of the revision data in the store. It can be 0 if it isn't
426 Size of the revision data in the store. It can be 0 if it isn't
427 needed by the store.
427 needed by the store.
428 uncompressed size
428 uncompressed size
429 Fulltext size. It can be 0 if it isn't needed by the store.
429 Fulltext size. It can be 0 if it isn't needed by the store.
430 base revision
430 base revision
431 Revision number of revision the delta for storage is encoded
431 Revision number of revision the delta for storage is encoded
432 against. -1 indicates not encoded against a base revision.
432 against. -1 indicates not encoded against a base revision.
433 link revision
433 link revision
434 Revision number of changelog revision this entry is related to.
434 Revision number of changelog revision this entry is related to.
435 p1 revision
435 p1 revision
436 Revision number of 1st parent. -1 if no 1st parent.
436 Revision number of 1st parent. -1 if no 1st parent.
437 p2 revision
437 p2 revision
438 Revision number of 2nd parent. -1 if no 2nd parent.
438 Revision number of 2nd parent. -1 if no 2nd parent.
439 node
439 node
440 Binary node value for this revision number.
440 Binary node value for this revision number.
441
441
442 Negative values should index off the end of the sequence. ``-1``
442 Negative values should index off the end of the sequence. ``-1``
443 should return the null revision. ``-2`` should return the most
443 should return the null revision. ``-2`` should return the most
444 recent revision.
444 recent revision.
445 """
445 """
446
446
447 def __contains__(rev):
447 def __contains__(rev):
448 """Whether a revision number exists."""
448 """Whether a revision number exists."""
449
449
450 def insert(i, entry):
450 def insert(i, entry):
451 """Add an item to the index at specific revision."""
451 """Add an item to the index at specific revision."""
452
452
453 class ifileindex(interfaceutil.Interface):
453 class ifileindex(interfaceutil.Interface):
454 """Storage interface for index data of a single file.
454 """Storage interface for index data of a single file.
455
455
456 File storage data is divided into index metadata and data storage.
456 File storage data is divided into index metadata and data storage.
457 This interface defines the index portion of the interface.
457 This interface defines the index portion of the interface.
458
458
459 The index logically consists of:
459 The index logically consists of:
460
460
461 * A mapping between revision numbers and nodes.
461 * A mapping between revision numbers and nodes.
462 * DAG data (storing and querying the relationship between nodes).
462 * DAG data (storing and querying the relationship between nodes).
463 * Metadata to facilitate storage.
463 * Metadata to facilitate storage.
464 """
464 """
465 index = interfaceutil.Attribute(
465 index = interfaceutil.Attribute(
466 """An ``ifilerevisionssequence`` instance.""")
466 """An ``ifilerevisionssequence`` instance.""")
467
467
468 def __len__():
468 def __len__():
469 """Obtain the number of revisions stored for this file."""
469 """Obtain the number of revisions stored for this file."""
470
470
471 def __iter__():
471 def __iter__():
472 """Iterate over revision numbers for this file."""
472 """Iterate over revision numbers for this file."""
473
473
474 def revs(start=0, stop=None):
474 def revs(start=0, stop=None):
475 """Iterate over revision numbers for this file, with control."""
475 """Iterate over revision numbers for this file, with control."""
476
476
477 def parents(node):
477 def parents(node):
478 """Returns a 2-tuple of parent nodes for a revision.
478 """Returns a 2-tuple of parent nodes for a revision.
479
479
480 Values will be ``nullid`` if the parent is empty.
480 Values will be ``nullid`` if the parent is empty.
481 """
481 """
482
482
483 def parentrevs(rev):
483 def parentrevs(rev):
484 """Like parents() but operates on revision numbers."""
484 """Like parents() but operates on revision numbers."""
485
485
486 def rev(node):
486 def rev(node):
487 """Obtain the revision number given a node.
487 """Obtain the revision number given a node.
488
488
489 Raises ``error.LookupError`` if the node is not known.
489 Raises ``error.LookupError`` if the node is not known.
490 """
490 """
491
491
492 def node(rev):
492 def node(rev):
493 """Obtain the node value given a revision number.
493 """Obtain the node value given a revision number.
494
494
495 Raises ``IndexError`` if the node is not known.
495 Raises ``IndexError`` if the node is not known.
496 """
496 """
497
497
498 def lookup(node):
498 def lookup(node):
499 """Attempt to resolve a value to a node.
499 """Attempt to resolve a value to a node.
500
500
501 Value can be a binary node, hex node, revision number, or a string
501 Value can be a binary node, hex node, revision number, or a string
502 that can be converted to an integer.
502 that can be converted to an integer.
503
503
504 Raises ``error.LookupError`` if a node could not be resolved.
504 Raises ``error.LookupError`` if a node could not be resolved.
505 """
505 """
506
506
507 def linkrev(rev):
507 def linkrev(rev):
508 """Obtain the changeset revision number a revision is linked to."""
508 """Obtain the changeset revision number a revision is linked to."""
509
509
510 def flags(rev):
510 def flags(rev):
511 """Obtain flags used to affect storage of a revision."""
511 """Obtain flags used to affect storage of a revision."""
512
512
513 def iscensored(rev):
513 def iscensored(rev):
514 """Return whether a revision's content has been censored."""
514 """Return whether a revision's content has been censored."""
515
515
516 def commonancestorsheads(node1, node2):
516 def commonancestorsheads(node1, node2):
517 """Obtain an iterable of nodes containing heads of common ancestors.
517 """Obtain an iterable of nodes containing heads of common ancestors.
518
518
519 See ``ancestor.commonancestorsheads()``.
519 See ``ancestor.commonancestorsheads()``.
520 """
520 """
521
521
522 def descendants(revs):
522 def descendants(revs):
523 """Obtain descendant revision numbers for a set of revision numbers.
523 """Obtain descendant revision numbers for a set of revision numbers.
524
524
525 If ``nullrev`` is in the set, this is equivalent to ``revs()``.
525 If ``nullrev`` is in the set, this is equivalent to ``revs()``.
526 """
526 """
527
527
528 def headrevs():
528 def headrevs():
529 """Obtain a list of revision numbers that are DAG heads.
529 """Obtain a list of revision numbers that are DAG heads.
530
530
531 The list is sorted oldest to newest.
531 The list is sorted oldest to newest.
532
532
533 TODO determine if sorting is required.
533 TODO determine if sorting is required.
534 """
534 """
535
535
536 def heads(start=None, stop=None):
536 def heads(start=None, stop=None):
537 """Obtain a list of nodes that are DAG heads, with control.
537 """Obtain a list of nodes that are DAG heads, with control.
538
538
539 The set of revisions examined can be limited by specifying
539 The set of revisions examined can be limited by specifying
540 ``start`` and ``stop``. ``start`` is a node. ``stop`` is an
540 ``start`` and ``stop``. ``start`` is a node. ``stop`` is an
541 iterable of nodes. DAG traversal starts at earlier revision
541 iterable of nodes. DAG traversal starts at earlier revision
542 ``start`` and iterates forward until any node in ``stop`` is
542 ``start`` and iterates forward until any node in ``stop`` is
543 encountered.
543 encountered.
544 """
544 """
545
545
546 def children(node):
546 def children(node):
547 """Obtain nodes that are children of a node.
547 """Obtain nodes that are children of a node.
548
548
549 Returns a list of nodes.
549 Returns a list of nodes.
550 """
550 """
551
551
552 def deltaparent(rev):
552 def deltaparent(rev):
553 """"Return the revision that is a suitable parent to delta against."""
553 """"Return the revision that is a suitable parent to delta against."""
554
554
555 class ifiledata(interfaceutil.Interface):
555 class ifiledata(interfaceutil.Interface):
556 """Storage interface for data storage of a specific file.
556 """Storage interface for data storage of a specific file.
557
557
558 This complements ``ifileindex`` and provides an interface for accessing
558 This complements ``ifileindex`` and provides an interface for accessing
559 data for a tracked file.
559 data for a tracked file.
560 """
560 """
561 def rawsize(rev):
561 def rawsize(rev):
562 """The size of the fulltext data for a revision as stored."""
562 """The size of the fulltext data for a revision as stored."""
563
563
564 def size(rev):
564 def size(rev):
565 """Obtain the fulltext size of file data.
565 """Obtain the fulltext size of file data.
566
566
567 Any metadata is excluded from size measurements. Use ``rawsize()`` if
567 Any metadata is excluded from size measurements. Use ``rawsize()`` if
568 metadata size is important.
568 metadata size is important.
569 """
569 """
570
570
571 def checkhash(fulltext, node, p1=None, p2=None, rev=None):
571 def checkhash(fulltext, node, p1=None, p2=None, rev=None):
572 """Validate the stored hash of a given fulltext and node.
572 """Validate the stored hash of a given fulltext and node.
573
573
574 Raises ``error.RevlogError`` if hash validation fails.
574 Raises ``error.RevlogError`` if hash validation fails.
575 """
575 """
576
576
577 def revision(node, raw=False):
577 def revision(node, raw=False):
578 """"Obtain fulltext data for a node.
578 """"Obtain fulltext data for a node.
579
579
580 By default, any storage transformations are applied before the data
580 By default, any storage transformations are applied before the data
581 is returned. If ``raw`` is True, non-raw storage transformations
581 is returned. If ``raw`` is True, non-raw storage transformations
582 are not applied.
582 are not applied.
583
583
584 The fulltext data may contain a header containing metadata. Most
584 The fulltext data may contain a header containing metadata. Most
585 consumers should use ``read()`` to obtain the actual file data.
585 consumers should use ``read()`` to obtain the actual file data.
586 """
586 """
587
587
588 def read(node):
588 def read(node):
589 """Resolve file fulltext data.
589 """Resolve file fulltext data.
590
590
591 This is similar to ``revision()`` except any metadata in the data
591 This is similar to ``revision()`` except any metadata in the data
592 headers is stripped.
592 headers is stripped.
593 """
593 """
594
594
595 def renamed(node):
595 def renamed(node):
596 """Obtain copy metadata for a node.
596 """Obtain copy metadata for a node.
597
597
598 Returns ``False`` if no copy metadata is stored or a 2-tuple of
598 Returns ``False`` if no copy metadata is stored or a 2-tuple of
599 (path, node) from which this revision was copied.
599 (path, node) from which this revision was copied.
600 """
600 """
601
601
602 def cmp(node, fulltext):
602 def cmp(node, fulltext):
603 """Compare fulltext to another revision.
603 """Compare fulltext to another revision.
604
604
605 Returns True if the fulltext is different from what is stored.
605 Returns True if the fulltext is different from what is stored.
606
606
607 This takes copy metadata into account.
607 This takes copy metadata into account.
608
608
609 TODO better document the copy metadata and censoring logic.
609 TODO better document the copy metadata and censoring logic.
610 """
610 """
611
611
612 def revdiff(rev1, rev2):
612 def revdiff(rev1, rev2):
613 """Obtain a delta between two revision numbers.
613 """Obtain a delta between two revision numbers.
614
614
615 Operates on raw data in the store (``revision(node, raw=True)``).
615 Operates on raw data in the store (``revision(node, raw=True)``).
616
616
617 The returned data is the result of ``bdiff.bdiff`` on the raw
617 The returned data is the result of ``bdiff.bdiff`` on the raw
618 revision data.
618 revision data.
619 """
619 """
620
620
621 def emitrevisiondeltas(requests):
621 def emitrevisiondeltas(requests):
622 """Produce ``irevisiondelta`` from ``irevisiondeltarequest``s.
622 """Produce ``irevisiondelta`` from ``irevisiondeltarequest``s.
623
623
624 Given an iterable of objects conforming to the ``irevisiondeltarequest``
624 Given an iterable of objects conforming to the ``irevisiondeltarequest``
625 interface, emits objects conforming to the ``irevisiondelta``
625 interface, emits objects conforming to the ``irevisiondelta``
626 interface.
626 interface.
627
627
628 This method is a generator.
628 This method is a generator.
629
629
630 ``irevisiondelta`` instances should be emitted in the same order as the
630 ``irevisiondelta`` instances should be emitted in the same order as the
631 ``irevisiondeltarequest`` instances that were passed in.
631 ``irevisiondeltarequest`` instances that were passed in.
632
632
633 The emitted objects MUST conform to the results of
633 The emitted objects MUST conform to the results of
634 ``irevisiondeltarequest``. Namely, they must respect any requests
634 ``irevisiondeltarequest``. Namely, they must respect any requests
635 for building a delta from a specific ``basenode`` if defined.
635 for building a delta from a specific ``basenode`` if defined.
636
636
637 When sending deltas, implementations must take into account whether
637 When sending deltas, implementations must take into account whether
638 the client has the base delta before encoding a delta against that
638 the client has the base delta before encoding a delta against that
639 revision. A revision encountered previously in ``requests`` is
639 revision. A revision encountered previously in ``requests`` is
640 always a suitable base revision. An example of a bad delta is a delta
640 always a suitable base revision. An example of a bad delta is a delta
641 against a non-ancestor revision. Another example of a bad delta is a
641 against a non-ancestor revision. Another example of a bad delta is a
642 delta against a censored revision.
642 delta against a censored revision.
643 """
643 """
644
644
645 class ifilemutation(interfaceutil.Interface):
645 class ifilemutation(interfaceutil.Interface):
646 """Storage interface for mutation events of a tracked file."""
646 """Storage interface for mutation events of a tracked file."""
647
647
648 def add(filedata, meta, transaction, linkrev, p1, p2):
648 def add(filedata, meta, transaction, linkrev, p1, p2):
649 """Add a new revision to the store.
649 """Add a new revision to the store.
650
650
651 Takes file data, dictionary of metadata, a transaction, linkrev,
651 Takes file data, dictionary of metadata, a transaction, linkrev,
652 and parent nodes.
652 and parent nodes.
653
653
654 Returns the node that was added.
654 Returns the node that was added.
655
655
656 May no-op if a revision matching the supplied data is already stored.
656 May no-op if a revision matching the supplied data is already stored.
657 """
657 """
658
658
659 def addrevision(revisiondata, transaction, linkrev, p1, p2, node=None,
659 def addrevision(revisiondata, transaction, linkrev, p1, p2, node=None,
660 flags=0, cachedelta=None):
660 flags=0, cachedelta=None):
661 """Add a new revision to the store.
661 """Add a new revision to the store.
662
662
663 This is similar to ``add()`` except it operates at a lower level.
663 This is similar to ``add()`` except it operates at a lower level.
664
664
665 The data passed in already contains a metadata header, if any.
665 The data passed in already contains a metadata header, if any.
666
666
667 ``node`` and ``flags`` can be used to define the expected node and
667 ``node`` and ``flags`` can be used to define the expected node and
668 the flags to use with storage.
668 the flags to use with storage.
669
669
670 ``add()`` is usually called when adding files from e.g. the working
670 ``add()`` is usually called when adding files from e.g. the working
671 directory. ``addrevision()`` is often called by ``add()`` and for
671 directory. ``addrevision()`` is often called by ``add()`` and for
672 scenarios where revision data has already been computed, such as when
672 scenarios where revision data has already been computed, such as when
673 applying raw data from a peer repo.
673 applying raw data from a peer repo.
674 """
674 """
675
675
676 def addgroup(deltas, linkmapper, transaction, addrevisioncb=None):
676 def addgroup(deltas, linkmapper, transaction, addrevisioncb=None):
677 """Process a series of deltas for storage.
677 """Process a series of deltas for storage.
678
678
679 ``deltas`` is an iterable of 7-tuples of
679 ``deltas`` is an iterable of 7-tuples of
680 (node, p1, p2, linknode, deltabase, delta, flags) defining revisions
680 (node, p1, p2, linknode, deltabase, delta, flags) defining revisions
681 to add.
681 to add.
682
682
683 The ``delta`` field contains ``mpatch`` data to apply to a base
683 The ``delta`` field contains ``mpatch`` data to apply to a base
684 revision, identified by ``deltabase``. The base node can be
684 revision, identified by ``deltabase``. The base node can be
685 ``nullid``, in which case the header from the delta can be ignored
685 ``nullid``, in which case the header from the delta can be ignored
686 and the delta used as the fulltext.
686 and the delta used as the fulltext.
687
687
688 ``addrevisioncb`` should be called for each node as it is committed.
688 ``addrevisioncb`` should be called for each node as it is committed.
689
689
690 Returns a list of nodes that were processed. A node will be in the list
690 Returns a list of nodes that were processed. A node will be in the list
691 even if it existed in the store previously.
691 even if it existed in the store previously.
692 """
692 """
693
693
694 def getstrippoint(minlink):
694 def getstrippoint(minlink):
695 """Find the minimum revision that must be stripped to strip a linkrev.
695 """Find the minimum revision that must be stripped to strip a linkrev.
696
696
697 Returns a 2-tuple containing the minimum revision number and a set
697 Returns a 2-tuple containing the minimum revision number and a set
698 of all revisions numbers that would be broken by this strip.
698 of all revisions numbers that would be broken by this strip.
699
699
700 TODO this is highly revlog centric and should be abstracted into
700 TODO this is highly revlog centric and should be abstracted into
701 a higher-level deletion API. ``repair.strip()`` relies on this.
701 a higher-level deletion API. ``repair.strip()`` relies on this.
702 """
702 """
703
703
704 def strip(minlink, transaction):
704 def strip(minlink, transaction):
705 """Remove storage of items starting at a linkrev.
705 """Remove storage of items starting at a linkrev.
706
706
707 This uses ``getstrippoint()`` to determine the first node to remove.
707 This uses ``getstrippoint()`` to determine the first node to remove.
708 Then it effectively truncates storage for all revisions after that.
708 Then it effectively truncates storage for all revisions after that.
709
709
710 TODO this is highly revlog centric and should be abstracted into a
710 TODO this is highly revlog centric and should be abstracted into a
711 higher-level deletion API.
711 higher-level deletion API.
712 """
712 """
713
713
714 class ifilestorage(ifileindex, ifiledata, ifilemutation):
714 class ifilestorage(ifileindex, ifiledata, ifilemutation):
715 """Complete storage interface for a single tracked file."""
715 """Complete storage interface for a single tracked file."""
716
716
717 version = interfaceutil.Attribute(
717 version = interfaceutil.Attribute(
718 """Version number of storage.
718 """Version number of storage.
719
719
720 TODO this feels revlog centric and could likely be removed.
720 TODO this feels revlog centric and could likely be removed.
721 """)
721 """)
722
722
723 _generaldelta = interfaceutil.Attribute(
723 _generaldelta = interfaceutil.Attribute(
724 """Whether deltas can be against any parent revision.
724 """Whether deltas can be against any parent revision.
725
725
726 TODO this is used by changegroup code and it could probably be
726 TODO this is used by changegroup code and it could probably be
727 folded into another API.
727 folded into another API.
728 """)
728 """)
729
729
730 def files():
730 def files():
731 """Obtain paths that are backing storage for this file.
731 """Obtain paths that are backing storage for this file.
732
732
733 TODO this is used heavily by verify code and there should probably
733 TODO this is used heavily by verify code and there should probably
734 be a better API for that.
734 be a better API for that.
735 """
735 """
736
736
737 def checksize():
737 def checksize():
738 """Obtain the expected sizes of backing files.
738 """Obtain the expected sizes of backing files.
739
739
740 TODO this is used by verify and it should not be part of the interface.
740 TODO this is used by verify and it should not be part of the interface.
741 """
741 """
742
742
743 class idirs(interfaceutil.Interface):
743 class idirs(interfaceutil.Interface):
744 """Interface representing a collection of directories from paths.
744 """Interface representing a collection of directories from paths.
745
745
746 This interface is essentially a derived data structure representing
746 This interface is essentially a derived data structure representing
747 directories from a collection of paths.
747 directories from a collection of paths.
748 """
748 """
749
749
750 def addpath(path):
750 def addpath(path):
751 """Add a path to the collection.
751 """Add a path to the collection.
752
752
753 All directories in the path will be added to the collection.
753 All directories in the path will be added to the collection.
754 """
754 """
755
755
756 def delpath(path):
756 def delpath(path):
757 """Remove a path from the collection.
757 """Remove a path from the collection.
758
758
759 If the removal was the last path in a particular directory, the
759 If the removal was the last path in a particular directory, the
760 directory is removed from the collection.
760 directory is removed from the collection.
761 """
761 """
762
762
763 def __iter__():
763 def __iter__():
764 """Iterate over the directories in this collection of paths."""
764 """Iterate over the directories in this collection of paths."""
765
765
766 def __contains__(path):
766 def __contains__(path):
767 """Whether a specific directory is in this collection."""
767 """Whether a specific directory is in this collection."""
768
768
769 class imanifestdict(interfaceutil.Interface):
769 class imanifestdict(interfaceutil.Interface):
770 """Interface representing a manifest data structure.
770 """Interface representing a manifest data structure.
771
771
772 A manifest is effectively a dict mapping paths to entries. Each entry
772 A manifest is effectively a dict mapping paths to entries. Each entry
773 consists of a binary node and extra flags affecting that entry.
773 consists of a binary node and extra flags affecting that entry.
774 """
774 """
775
775
776 def __getitem__(path):
776 def __getitem__(path):
777 """Returns the binary node value for a path in the manifest.
777 """Returns the binary node value for a path in the manifest.
778
778
779 Raises ``KeyError`` if the path does not exist in the manifest.
779 Raises ``KeyError`` if the path does not exist in the manifest.
780
780
781 Equivalent to ``self.find(path)[0]``.
781 Equivalent to ``self.find(path)[0]``.
782 """
782 """
783
783
784 def find(path):
784 def find(path):
785 """Returns the entry for a path in the manifest.
785 """Returns the entry for a path in the manifest.
786
786
787 Returns a 2-tuple of (node, flags).
787 Returns a 2-tuple of (node, flags).
788
788
789 Raises ``KeyError`` if the path does not exist in the manifest.
789 Raises ``KeyError`` if the path does not exist in the manifest.
790 """
790 """
791
791
792 def __len__():
792 def __len__():
793 """Return the number of entries in the manifest."""
793 """Return the number of entries in the manifest."""
794
794
795 def __nonzero__():
795 def __nonzero__():
796 """Returns True if the manifest has entries, False otherwise."""
796 """Returns True if the manifest has entries, False otherwise."""
797
797
798 __bool__ = __nonzero__
798 __bool__ = __nonzero__
799
799
800 def __setitem__(path, node):
800 def __setitem__(path, node):
801 """Define the node value for a path in the manifest.
801 """Define the node value for a path in the manifest.
802
802
803 If the path is already in the manifest, its flags will be copied to
803 If the path is already in the manifest, its flags will be copied to
804 the new entry.
804 the new entry.
805 """
805 """
806
806
807 def __contains__(path):
807 def __contains__(path):
808 """Whether a path exists in the manifest."""
808 """Whether a path exists in the manifest."""
809
809
810 def __delitem__(path):
810 def __delitem__(path):
811 """Remove a path from the manifest.
811 """Remove a path from the manifest.
812
812
813 Raises ``KeyError`` if the path is not in the manifest.
813 Raises ``KeyError`` if the path is not in the manifest.
814 """
814 """
815
815
816 def __iter__():
816 def __iter__():
817 """Iterate over paths in the manifest."""
817 """Iterate over paths in the manifest."""
818
818
819 def iterkeys():
819 def iterkeys():
820 """Iterate over paths in the manifest."""
820 """Iterate over paths in the manifest."""
821
821
822 def keys():
822 def keys():
823 """Obtain a list of paths in the manifest."""
823 """Obtain a list of paths in the manifest."""
824
824
825 def filesnotin(other, match=None):
825 def filesnotin(other, match=None):
826 """Obtain the set of paths in this manifest but not in another.
826 """Obtain the set of paths in this manifest but not in another.
827
827
828 ``match`` is an optional matcher function to be applied to both
828 ``match`` is an optional matcher function to be applied to both
829 manifests.
829 manifests.
830
830
831 Returns a set of paths.
831 Returns a set of paths.
832 """
832 """
833
833
834 def dirs():
834 def dirs():
835 """Returns an object implementing the ``idirs`` interface."""
835 """Returns an object implementing the ``idirs`` interface."""
836
836
837 def hasdir(dir):
837 def hasdir(dir):
838 """Returns a bool indicating if a directory is in this manifest."""
838 """Returns a bool indicating if a directory is in this manifest."""
839
839
840 def matches(match):
840 def matches(match):
841 """Generate a new manifest filtered through a matcher.
841 """Generate a new manifest filtered through a matcher.
842
842
843 Returns an object conforming to the ``imanifestdict`` interface.
843 Returns an object conforming to the ``imanifestdict`` interface.
844 """
844 """
845
845
846 def walk(match):
846 def walk(match):
847 """Generator of paths in manifest satisfying a matcher.
847 """Generator of paths in manifest satisfying a matcher.
848
848
849 This is equivalent to ``self.matches(match).iterkeys()`` except a new
849 This is equivalent to ``self.matches(match).iterkeys()`` except a new
850 manifest object is not created.
850 manifest object is not created.
851
851
852 If the matcher has explicit files listed and they don't exist in
852 If the matcher has explicit files listed and they don't exist in
853 the manifest, ``match.bad()`` is called for each missing file.
853 the manifest, ``match.bad()`` is called for each missing file.
854 """
854 """
855
855
856 def diff(other, match=None, clean=False):
856 def diff(other, match=None, clean=False):
857 """Find differences between this manifest and another.
857 """Find differences between this manifest and another.
858
858
859 This manifest is compared to ``other``.
859 This manifest is compared to ``other``.
860
860
861 If ``match`` is provided, the two manifests are filtered against this
861 If ``match`` is provided, the two manifests are filtered against this
862 matcher and only entries satisfying the matcher are compared.
862 matcher and only entries satisfying the matcher are compared.
863
863
864 If ``clean`` is True, unchanged files are included in the returned
864 If ``clean`` is True, unchanged files are included in the returned
865 object.
865 object.
866
866
867 Returns a dict with paths as keys and values of 2-tuples of 2-tuples of
867 Returns a dict with paths as keys and values of 2-tuples of 2-tuples of
868 the form ``((node1, flag1), (node2, flag2))`` where ``(node1, flag1)``
868 the form ``((node1, flag1), (node2, flag2))`` where ``(node1, flag1)``
869 represents the node and flags for this manifest and ``(node2, flag2)``
869 represents the node and flags for this manifest and ``(node2, flag2)``
870 are the same for the other manifest.
870 are the same for the other manifest.
871 """
871 """
872
872
873 def setflag(path, flag):
873 def setflag(path, flag):
874 """Set the flag value for a given path.
874 """Set the flag value for a given path.
875
875
876 Raises ``KeyError`` if the path is not already in the manifest.
876 Raises ``KeyError`` if the path is not already in the manifest.
877 """
877 """
878
878
879 def get(path, default=None):
879 def get(path, default=None):
880 """Obtain the node value for a path or a default value if missing."""
880 """Obtain the node value for a path or a default value if missing."""
881
881
882 def flags(path, default=''):
882 def flags(path, default=''):
883 """Return the flags value for a path or a default value if missing."""
883 """Return the flags value for a path or a default value if missing."""
884
884
885 def copy():
885 def copy():
886 """Return a copy of this manifest."""
886 """Return a copy of this manifest."""
887
887
888 def items():
888 def items():
889 """Returns an iterable of (path, node) for items in this manifest."""
889 """Returns an iterable of (path, node) for items in this manifest."""
890
890
891 def iteritems():
891 def iteritems():
892 """Identical to items()."""
892 """Identical to items()."""
893
893
894 def iterentries():
894 def iterentries():
895 """Returns an iterable of (path, node, flags) for this manifest.
895 """Returns an iterable of (path, node, flags) for this manifest.
896
896
897 Similar to ``iteritems()`` except items are a 3-tuple and include
897 Similar to ``iteritems()`` except items are a 3-tuple and include
898 flags.
898 flags.
899 """
899 """
900
900
901 def text():
901 def text():
902 """Obtain the raw data representation for this manifest.
902 """Obtain the raw data representation for this manifest.
903
903
904 Result is used to create a manifest revision.
904 Result is used to create a manifest revision.
905 """
905 """
906
906
907 def fastdelta(base, changes):
907 def fastdelta(base, changes):
908 """Obtain a delta between this manifest and another given changes.
908 """Obtain a delta between this manifest and another given changes.
909
909
910 ``base`` is the raw data representation for another manifest.
910 ``base`` is the raw data representation for another manifest.
911
911
912 ``changes`` is an iterable of ``(path, to_delete)``.
912 ``changes`` is an iterable of ``(path, to_delete)``.
913
913
914 Returns a 2-tuple containing ``bytearray(self.text())`` and the
914 Returns a 2-tuple containing ``bytearray(self.text())`` and the
915 delta between ``base`` and this manifest.
915 delta between ``base`` and this manifest.
916 """
916 """
917
917
918 class imanifestrevisionbase(interfaceutil.Interface):
918 class imanifestrevisionbase(interfaceutil.Interface):
919 """Base interface representing a single revision of a manifest.
919 """Base interface representing a single revision of a manifest.
920
920
921 Should not be used as a primary interface: should always be inherited
921 Should not be used as a primary interface: should always be inherited
922 as part of a larger interface.
922 as part of a larger interface.
923 """
923 """
924
924
925 def new():
925 def new():
926 """Obtain a new manifest instance.
926 """Obtain a new manifest instance.
927
927
928 Returns an object conforming to the ``imanifestrevisionwritable``
928 Returns an object conforming to the ``imanifestrevisionwritable``
929 interface. The instance will be associated with the same
929 interface. The instance will be associated with the same
930 ``imanifestlog`` collection as this instance.
930 ``imanifestlog`` collection as this instance.
931 """
931 """
932
932
933 def copy():
933 def copy():
934 """Obtain a copy of this manifest instance.
934 """Obtain a copy of this manifest instance.
935
935
936 Returns an object conforming to the ``imanifestrevisionwritable``
936 Returns an object conforming to the ``imanifestrevisionwritable``
937 interface. The instance will be associated with the same
937 interface. The instance will be associated with the same
938 ``imanifestlog`` collection as this instance.
938 ``imanifestlog`` collection as this instance.
939 """
939 """
940
940
941 def read():
941 def read():
942 """Obtain the parsed manifest data structure.
942 """Obtain the parsed manifest data structure.
943
943
944 The returned object conforms to the ``imanifestdict`` interface.
944 The returned object conforms to the ``imanifestdict`` interface.
945 """
945 """
946
946
947 class imanifestrevisionstored(imanifestrevisionbase):
947 class imanifestrevisionstored(imanifestrevisionbase):
948 """Interface representing a manifest revision committed to storage."""
948 """Interface representing a manifest revision committed to storage."""
949
949
950 def node():
950 def node():
951 """The binary node for this manifest."""
951 """The binary node for this manifest."""
952
952
953 parents = interfaceutil.Attribute(
953 parents = interfaceutil.Attribute(
954 """List of binary nodes that are parents for this manifest revision."""
954 """List of binary nodes that are parents for this manifest revision."""
955 )
955 )
956
956
957 def readdelta(shallow=False):
957 def readdelta(shallow=False):
958 """Obtain the manifest data structure representing changes from parent.
958 """Obtain the manifest data structure representing changes from parent.
959
959
960 This manifest is compared to its 1st parent. A new manifest representing
960 This manifest is compared to its 1st parent. A new manifest representing
961 those differences is constructed.
961 those differences is constructed.
962
962
963 The returned object conforms to the ``imanifestdict`` interface.
963 The returned object conforms to the ``imanifestdict`` interface.
964 """
964 """
965
965
966 def readfast(shallow=False):
966 def readfast(shallow=False):
967 """Calls either ``read()`` or ``readdelta()``.
967 """Calls either ``read()`` or ``readdelta()``.
968
968
969 The faster of the two options is called.
969 The faster of the two options is called.
970 """
970 """
971
971
972 def find(key):
972 def find(key):
973 """Calls self.read().find(key)``.
973 """Calls self.read().find(key)``.
974
974
975 Returns a 2-tuple of ``(node, flags)`` or raises ``KeyError``.
975 Returns a 2-tuple of ``(node, flags)`` or raises ``KeyError``.
976 """
976 """
977
977
978 class imanifestrevisionwritable(imanifestrevisionbase):
978 class imanifestrevisionwritable(imanifestrevisionbase):
979 """Interface representing a manifest revision that can be committed."""
979 """Interface representing a manifest revision that can be committed."""
980
980
981 def write(transaction, linkrev, p1node, p2node, added, removed, match=None):
981 def write(transaction, linkrev, p1node, p2node, added, removed, match=None):
982 """Add this revision to storage.
982 """Add this revision to storage.
983
983
984 Takes a transaction object, the changeset revision number it will
984 Takes a transaction object, the changeset revision number it will
985 be associated with, its parent nodes, and lists of added and
985 be associated with, its parent nodes, and lists of added and
986 removed paths.
986 removed paths.
987
987
988 If match is provided, storage can choose not to inspect or write out
988 If match is provided, storage can choose not to inspect or write out
989 items that do not match. Storage is still required to be able to provide
989 items that do not match. Storage is still required to be able to provide
990 the full manifest in the future for any directories written (these
990 the full manifest in the future for any directories written (these
991 manifests should not be "narrowed on disk").
991 manifests should not be "narrowed on disk").
992
992
993 Returns the binary node of the created revision.
993 Returns the binary node of the created revision.
994 """
994 """
995
995
996 class imanifeststorage(interfaceutil.Interface):
996 class imanifeststorage(interfaceutil.Interface):
997 """Storage interface for manifest data."""
997 """Storage interface for manifest data."""
998
998
999 tree = interfaceutil.Attribute(
999 tree = interfaceutil.Attribute(
1000 """The path to the directory this manifest tracks.
1000 """The path to the directory this manifest tracks.
1001
1001
1002 The empty bytestring represents the root manifest.
1002 The empty bytestring represents the root manifest.
1003 """)
1003 """)
1004
1004
1005 index = interfaceutil.Attribute(
1005 index = interfaceutil.Attribute(
1006 """An ``ifilerevisionssequence`` instance.""")
1006 """An ``ifilerevisionssequence`` instance.""")
1007
1007
1008 indexfile = interfaceutil.Attribute(
1008 indexfile = interfaceutil.Attribute(
1009 """Path of revlog index file.
1009 """Path of revlog index file.
1010
1010
1011 TODO this is revlog specific and should not be exposed.
1011 TODO this is revlog specific and should not be exposed.
1012 """)
1012 """)
1013
1013
1014 opener = interfaceutil.Attribute(
1014 opener = interfaceutil.Attribute(
1015 """VFS opener to use to access underlying files used for storage.
1015 """VFS opener to use to access underlying files used for storage.
1016
1016
1017 TODO this is revlog specific and should not be exposed.
1017 TODO this is revlog specific and should not be exposed.
1018 """)
1018 """)
1019
1019
1020 version = interfaceutil.Attribute(
1020 version = interfaceutil.Attribute(
1021 """Revlog version number.
1021 """Revlog version number.
1022
1022
1023 TODO this is revlog specific and should not be exposed.
1023 TODO this is revlog specific and should not be exposed.
1024 """)
1024 """)
1025
1025
1026 _generaldelta = interfaceutil.Attribute(
1026 _generaldelta = interfaceutil.Attribute(
1027 """Whether generaldelta storage is being used.
1027 """Whether generaldelta storage is being used.
1028
1028
1029 TODO this is revlog specific and should not be exposed.
1029 TODO this is revlog specific and should not be exposed.
1030 """)
1030 """)
1031
1031
1032 fulltextcache = interfaceutil.Attribute(
1032 fulltextcache = interfaceutil.Attribute(
1033 """Dict with cache of fulltexts.
1033 """Dict with cache of fulltexts.
1034
1034
1035 TODO this doesn't feel appropriate for the storage interface.
1035 TODO this doesn't feel appropriate for the storage interface.
1036 """)
1036 """)
1037
1037
1038 def __len__():
1038 def __len__():
1039 """Obtain the number of revisions stored for this manifest."""
1039 """Obtain the number of revisions stored for this manifest."""
1040
1040
1041 def __iter__():
1041 def __iter__():
1042 """Iterate over revision numbers for this manifest."""
1042 """Iterate over revision numbers for this manifest."""
1043
1043
1044 def rev(node):
1044 def rev(node):
1045 """Obtain the revision number given a binary node.
1045 """Obtain the revision number given a binary node.
1046
1046
1047 Raises ``error.LookupError`` if the node is not known.
1047 Raises ``error.LookupError`` if the node is not known.
1048 """
1048 """
1049
1049
1050 def node(rev):
1050 def node(rev):
1051 """Obtain the node value given a revision number.
1051 """Obtain the node value given a revision number.
1052
1052
1053 Raises ``error.LookupError`` if the revision is not known.
1053 Raises ``error.LookupError`` if the revision is not known.
1054 """
1054 """
1055
1055
1056 def lookup(value):
1056 def lookup(value):
1057 """Attempt to resolve a value to a node.
1057 """Attempt to resolve a value to a node.
1058
1058
1059 Value can be a binary node, hex node, revision number, or a bytes
1059 Value can be a binary node, hex node, revision number, or a bytes
1060 that can be converted to an integer.
1060 that can be converted to an integer.
1061
1061
1062 Raises ``error.LookupError`` if a node could not be resolved.
1062 Raises ``error.LookupError`` if a node could not be resolved.
1063
1063
1064 TODO this is only used by debug* commands and can probably be deleted
1064 TODO this is only used by debug* commands and can probably be deleted
1065 easily.
1065 easily.
1066 """
1066 """
1067
1067
1068 def parents(node):
1068 def parents(node):
1069 """Returns a 2-tuple of parent nodes for a node.
1069 """Returns a 2-tuple of parent nodes for a node.
1070
1070
1071 Values will be ``nullid`` if the parent is empty.
1071 Values will be ``nullid`` if the parent is empty.
1072 """
1072 """
1073
1073
1074 def parentrevs(rev):
1074 def parentrevs(rev):
1075 """Like parents() but operates on revision numbers."""
1075 """Like parents() but operates on revision numbers."""
1076
1076
1077 def linkrev(rev):
1077 def linkrev(rev):
1078 """Obtain the changeset revision number a revision is linked to."""
1078 """Obtain the changeset revision number a revision is linked to."""
1079
1079
1080 def revision(node, _df=None, raw=False):
1080 def revision(node, _df=None, raw=False):
1081 """Obtain fulltext data for a node."""
1081 """Obtain fulltext data for a node."""
1082
1082
1083 def revdiff(rev1, rev2):
1083 def revdiff(rev1, rev2):
1084 """Obtain a delta between two revision numbers.
1084 """Obtain a delta between two revision numbers.
1085
1085
1086 The returned data is the result of ``bdiff.bdiff()`` on the raw
1086 The returned data is the result of ``bdiff.bdiff()`` on the raw
1087 revision data.
1087 revision data.
1088 """
1088 """
1089
1089
1090 def cmp(node, fulltext):
1090 def cmp(node, fulltext):
1091 """Compare fulltext to another revision.
1091 """Compare fulltext to another revision.
1092
1092
1093 Returns True if the fulltext is different from what is stored.
1093 Returns True if the fulltext is different from what is stored.
1094 """
1094 """
1095
1095
1096 def emitrevisiondeltas(requests):
1096 def emitrevisiondeltas(requests):
1097 """Produce ``irevisiondelta`` from ``irevisiondeltarequest``s.
1097 """Produce ``irevisiondelta`` from ``irevisiondeltarequest``s.
1098
1098
1099 See the documentation for ``ifiledata`` for more.
1099 See the documentation for ``ifiledata`` for more.
1100 """
1100 """
1101
1101
1102 def addgroup(deltas, linkmapper, transaction, addrevisioncb=None):
1102 def addgroup(deltas, linkmapper, transaction, addrevisioncb=None):
1103 """Process a series of deltas for storage.
1103 """Process a series of deltas for storage.
1104
1104
1105 See the documentation in ``ifilemutation`` for more.
1105 See the documentation in ``ifilemutation`` for more.
1106 """
1106 """
1107
1107
1108 def getstrippoint(minlink):
1108 def getstrippoint(minlink):
1109 """Find minimum revision that must be stripped to strip a linkrev.
1109 """Find minimum revision that must be stripped to strip a linkrev.
1110
1110
1111 See the documentation in ``ifilemutation`` for more.
1111 See the documentation in ``ifilemutation`` for more.
1112 """
1112 """
1113
1113
1114 def strip(minlink, transaction):
1114 def strip(minlink, transaction):
1115 """Remove storage of items starting at a linkrev.
1115 """Remove storage of items starting at a linkrev.
1116
1116
1117 See the documentation in ``ifilemutation`` for more.
1117 See the documentation in ``ifilemutation`` for more.
1118 """
1118 """
1119
1119
1120 def checksize():
1120 def checksize():
1121 """Obtain the expected sizes of backing files.
1121 """Obtain the expected sizes of backing files.
1122
1122
1123 TODO this is used by verify and it should not be part of the interface.
1123 TODO this is used by verify and it should not be part of the interface.
1124 """
1124 """
1125
1125
1126 def files():
1126 def files():
1127 """Obtain paths that are backing storage for this manifest.
1127 """Obtain paths that are backing storage for this manifest.
1128
1128
1129 TODO this is used by verify and there should probably be a better API
1129 TODO this is used by verify and there should probably be a better API
1130 for this functionality.
1130 for this functionality.
1131 """
1131 """
1132
1132
1133 def deltaparent(rev):
1133 def deltaparent(rev):
1134 """Obtain the revision that a revision is delta'd against.
1134 """Obtain the revision that a revision is delta'd against.
1135
1135
1136 TODO delta encoding is an implementation detail of storage and should
1136 TODO delta encoding is an implementation detail of storage and should
1137 not be exposed to the storage interface.
1137 not be exposed to the storage interface.
1138 """
1138 """
1139
1139
1140 def clone(tr, dest, **kwargs):
1140 def clone(tr, dest, **kwargs):
1141 """Clone this instance to another."""
1141 """Clone this instance to another."""
1142
1142
1143 def clearcaches(clear_persisted_data=False):
1143 def clearcaches(clear_persisted_data=False):
1144 """Clear any caches associated with this instance."""
1144 """Clear any caches associated with this instance."""
1145
1145
1146 def dirlog(d):
1146 def dirlog(d):
1147 """Obtain a manifest storage instance for a tree."""
1147 """Obtain a manifest storage instance for a tree."""
1148
1148
1149 def add(m, transaction, link, p1, p2, added, removed, readtree=None,
1149 def add(m, transaction, link, p1, p2, added, removed, readtree=None,
1150 match=None):
1150 match=None):
1151 """Add a revision to storage.
1151 """Add a revision to storage.
1152
1152
1153 ``m`` is an object conforming to ``imanifestdict``.
1153 ``m`` is an object conforming to ``imanifestdict``.
1154
1154
1155 ``link`` is the linkrev revision number.
1155 ``link`` is the linkrev revision number.
1156
1156
1157 ``p1`` and ``p2`` are the parent revision numbers.
1157 ``p1`` and ``p2`` are the parent revision numbers.
1158
1158
1159 ``added`` and ``removed`` are iterables of added and removed paths,
1159 ``added`` and ``removed`` are iterables of added and removed paths,
1160 respectively.
1160 respectively.
1161
1161
1162 ``readtree`` is a function that can be used to read the child tree(s)
1162 ``readtree`` is a function that can be used to read the child tree(s)
1163 when recursively writing the full tree structure when using
1163 when recursively writing the full tree structure when using
1164 treemanifests.
1164 treemanifests.
1165
1165
1166 ``match`` is a matcher that can be used to hint to storage that not all
1166 ``match`` is a matcher that can be used to hint to storage that not all
1167 paths must be inspected; this is an optimization and can be safely
1167 paths must be inspected; this is an optimization and can be safely
1168 ignored. Note that the storage must still be able to reproduce a full
1168 ignored. Note that the storage must still be able to reproduce a full
1169 manifest including files that did not match.
1169 manifest including files that did not match.
1170 """
1170 """
1171
1171
1172 class imanifestlog(interfaceutil.Interface):
1172 class imanifestlog(interfaceutil.Interface):
1173 """Interface representing a collection of manifest snapshots.
1173 """Interface representing a collection of manifest snapshots.
1174
1174
1175 Represents the root manifest in a repository.
1175 Represents the root manifest in a repository.
1176
1176
1177 Also serves as a means to access nested tree manifests and to cache
1177 Also serves as a means to access nested tree manifests and to cache
1178 tree manifests.
1178 tree manifests.
1179 """
1179 """
1180
1180
1181 def __getitem__(node):
1181 def __getitem__(node):
1182 """Obtain a manifest instance for a given binary node.
1182 """Obtain a manifest instance for a given binary node.
1183
1183
1184 Equivalent to calling ``self.get('', node)``.
1184 Equivalent to calling ``self.get('', node)``.
1185
1185
1186 The returned object conforms to the ``imanifestrevisionstored``
1186 The returned object conforms to the ``imanifestrevisionstored``
1187 interface.
1187 interface.
1188 """
1188 """
1189
1189
1190 def get(tree, node, verify=True):
1190 def get(tree, node, verify=True):
1191 """Retrieve the manifest instance for a given directory and binary node.
1191 """Retrieve the manifest instance for a given directory and binary node.
1192
1192
1193 ``node`` always refers to the node of the root manifest (which will be
1193 ``node`` always refers to the node of the root manifest (which will be
1194 the only manifest if flat manifests are being used).
1194 the only manifest if flat manifests are being used).
1195
1195
1196 If ``tree`` is the empty string, the root manifest is returned.
1196 If ``tree`` is the empty string, the root manifest is returned.
1197 Otherwise the manifest for the specified directory will be returned
1197 Otherwise the manifest for the specified directory will be returned
1198 (requires tree manifests).
1198 (requires tree manifests).
1199
1199
1200 If ``verify`` is True, ``LookupError`` is raised if the node is not
1200 If ``verify`` is True, ``LookupError`` is raised if the node is not
1201 known.
1201 known.
1202
1202
1203 The returned object conforms to the ``imanifestrevisionstored``
1203 The returned object conforms to the ``imanifestrevisionstored``
1204 interface.
1204 interface.
1205 """
1205 """
1206
1206
1207 def getstorage(tree):
1207 def getstorage(tree):
1208 """Retrieve an interface to storage for a particular tree.
1208 """Retrieve an interface to storage for a particular tree.
1209
1209
1210 If ``tree`` is the empty bytestring, storage for the root manifest will
1210 If ``tree`` is the empty bytestring, storage for the root manifest will
1211 be returned. Otherwise storage for a tree manifest is returned.
1211 be returned. Otherwise storage for a tree manifest is returned.
1212
1212
1213 TODO formalize interface for returned object.
1213 TODO formalize interface for returned object.
1214 """
1214 """
1215
1215
1216 def clearcaches():
1216 def clearcaches():
1217 """Clear caches associated with this collection."""
1217 """Clear caches associated with this collection."""
1218
1218
1219 def rev(node):
1219 def rev(node):
1220 """Obtain the revision number for a binary node.
1220 """Obtain the revision number for a binary node.
1221
1221
1222 Raises ``error.LookupError`` if the node is not known.
1222 Raises ``error.LookupError`` if the node is not known.
1223 """
1223 """
1224
1224
1225 class completelocalrepository(interfaceutil.Interface):
1225 class completelocalrepository(interfaceutil.Interface):
1226 """Monolithic interface for local repositories.
1226 """Monolithic interface for local repositories.
1227
1227
1228 This currently captures the reality of things - not how things should be.
1228 This currently captures the reality of things - not how things should be.
1229 """
1229 """
1230
1230
1231 supportedformats = interfaceutil.Attribute(
1231 supportedformats = interfaceutil.Attribute(
1232 """Set of requirements that apply to stream clone.
1232 """Set of requirements that apply to stream clone.
1233
1233
1234 This is actually a class attribute and is shared among all instances.
1234 This is actually a class attribute and is shared among all instances.
1235 """)
1235 """)
1236
1236
1237 openerreqs = interfaceutil.Attribute(
1238 """Set of requirements that are passed to the opener.
1239
1240 This is actually a class attribute and is shared among all instances.
1241 """)
1242
1243 supported = interfaceutil.Attribute(
1237 supported = interfaceutil.Attribute(
1244 """Set of requirements that this repo is capable of opening.""")
1238 """Set of requirements that this repo is capable of opening.""")
1245
1239
1246 requirements = interfaceutil.Attribute(
1240 requirements = interfaceutil.Attribute(
1247 """Set of requirements this repo uses.""")
1241 """Set of requirements this repo uses.""")
1248
1242
1249 filtername = interfaceutil.Attribute(
1243 filtername = interfaceutil.Attribute(
1250 """Name of the repoview that is active on this repo.""")
1244 """Name of the repoview that is active on this repo.""")
1251
1245
1252 wvfs = interfaceutil.Attribute(
1246 wvfs = interfaceutil.Attribute(
1253 """VFS used to access the working directory.""")
1247 """VFS used to access the working directory.""")
1254
1248
1255 vfs = interfaceutil.Attribute(
1249 vfs = interfaceutil.Attribute(
1256 """VFS rooted at the .hg directory.
1250 """VFS rooted at the .hg directory.
1257
1251
1258 Used to access repository data not in the store.
1252 Used to access repository data not in the store.
1259 """)
1253 """)
1260
1254
1261 svfs = interfaceutil.Attribute(
1255 svfs = interfaceutil.Attribute(
1262 """VFS rooted at the store.
1256 """VFS rooted at the store.
1263
1257
1264 Used to access repository data in the store. Typically .hg/store.
1258 Used to access repository data in the store. Typically .hg/store.
1265 But can point elsewhere if the store is shared.
1259 But can point elsewhere if the store is shared.
1266 """)
1260 """)
1267
1261
1268 root = interfaceutil.Attribute(
1262 root = interfaceutil.Attribute(
1269 """Path to the root of the working directory.""")
1263 """Path to the root of the working directory.""")
1270
1264
1271 path = interfaceutil.Attribute(
1265 path = interfaceutil.Attribute(
1272 """Path to the .hg directory.""")
1266 """Path to the .hg directory.""")
1273
1267
1274 origroot = interfaceutil.Attribute(
1268 origroot = interfaceutil.Attribute(
1275 """The filesystem path that was used to construct the repo.""")
1269 """The filesystem path that was used to construct the repo.""")
1276
1270
1277 auditor = interfaceutil.Attribute(
1271 auditor = interfaceutil.Attribute(
1278 """A pathauditor for the working directory.
1272 """A pathauditor for the working directory.
1279
1273
1280 This checks if a path refers to a nested repository.
1274 This checks if a path refers to a nested repository.
1281
1275
1282 Operates on the filesystem.
1276 Operates on the filesystem.
1283 """)
1277 """)
1284
1278
1285 nofsauditor = interfaceutil.Attribute(
1279 nofsauditor = interfaceutil.Attribute(
1286 """A pathauditor for the working directory.
1280 """A pathauditor for the working directory.
1287
1281
1288 This is like ``auditor`` except it doesn't do filesystem checks.
1282 This is like ``auditor`` except it doesn't do filesystem checks.
1289 """)
1283 """)
1290
1284
1291 baseui = interfaceutil.Attribute(
1285 baseui = interfaceutil.Attribute(
1292 """Original ui instance passed into constructor.""")
1286 """Original ui instance passed into constructor.""")
1293
1287
1294 ui = interfaceutil.Attribute(
1288 ui = interfaceutil.Attribute(
1295 """Main ui instance for this instance.""")
1289 """Main ui instance for this instance.""")
1296
1290
1297 sharedpath = interfaceutil.Attribute(
1291 sharedpath = interfaceutil.Attribute(
1298 """Path to the .hg directory of the repo this repo was shared from.""")
1292 """Path to the .hg directory of the repo this repo was shared from.""")
1299
1293
1300 store = interfaceutil.Attribute(
1294 store = interfaceutil.Attribute(
1301 """A store instance.""")
1295 """A store instance.""")
1302
1296
1303 spath = interfaceutil.Attribute(
1297 spath = interfaceutil.Attribute(
1304 """Path to the store.""")
1298 """Path to the store.""")
1305
1299
1306 sjoin = interfaceutil.Attribute(
1300 sjoin = interfaceutil.Attribute(
1307 """Alias to self.store.join.""")
1301 """Alias to self.store.join.""")
1308
1302
1309 cachevfs = interfaceutil.Attribute(
1303 cachevfs = interfaceutil.Attribute(
1310 """A VFS used to access the cache directory.
1304 """A VFS used to access the cache directory.
1311
1305
1312 Typically .hg/cache.
1306 Typically .hg/cache.
1313 """)
1307 """)
1314
1308
1315 filteredrevcache = interfaceutil.Attribute(
1309 filteredrevcache = interfaceutil.Attribute(
1316 """Holds sets of revisions to be filtered.""")
1310 """Holds sets of revisions to be filtered.""")
1317
1311
1318 names = interfaceutil.Attribute(
1312 names = interfaceutil.Attribute(
1319 """A ``namespaces`` instance.""")
1313 """A ``namespaces`` instance.""")
1320
1314
1321 def close():
1315 def close():
1322 """Close the handle on this repository."""
1316 """Close the handle on this repository."""
1323
1317
1324 def peer():
1318 def peer():
1325 """Obtain an object conforming to the ``peer`` interface."""
1319 """Obtain an object conforming to the ``peer`` interface."""
1326
1320
1327 def unfiltered():
1321 def unfiltered():
1328 """Obtain an unfiltered/raw view of this repo."""
1322 """Obtain an unfiltered/raw view of this repo."""
1329
1323
1330 def filtered(name, visibilityexceptions=None):
1324 def filtered(name, visibilityexceptions=None):
1331 """Obtain a named view of this repository."""
1325 """Obtain a named view of this repository."""
1332
1326
1333 obsstore = interfaceutil.Attribute(
1327 obsstore = interfaceutil.Attribute(
1334 """A store of obsolescence data.""")
1328 """A store of obsolescence data.""")
1335
1329
1336 changelog = interfaceutil.Attribute(
1330 changelog = interfaceutil.Attribute(
1337 """A handle on the changelog revlog.""")
1331 """A handle on the changelog revlog.""")
1338
1332
1339 manifestlog = interfaceutil.Attribute(
1333 manifestlog = interfaceutil.Attribute(
1340 """An instance conforming to the ``imanifestlog`` interface.
1334 """An instance conforming to the ``imanifestlog`` interface.
1341
1335
1342 Provides access to manifests for the repository.
1336 Provides access to manifests for the repository.
1343 """)
1337 """)
1344
1338
1345 dirstate = interfaceutil.Attribute(
1339 dirstate = interfaceutil.Attribute(
1346 """Working directory state.""")
1340 """Working directory state.""")
1347
1341
1348 narrowpats = interfaceutil.Attribute(
1342 narrowpats = interfaceutil.Attribute(
1349 """Matcher patterns for this repository's narrowspec.""")
1343 """Matcher patterns for this repository's narrowspec.""")
1350
1344
1351 def narrowmatch():
1345 def narrowmatch():
1352 """Obtain a matcher for the narrowspec."""
1346 """Obtain a matcher for the narrowspec."""
1353
1347
1354 def setnarrowpats(newincludes, newexcludes):
1348 def setnarrowpats(newincludes, newexcludes):
1355 """Define the narrowspec for this repository."""
1349 """Define the narrowspec for this repository."""
1356
1350
1357 def __getitem__(changeid):
1351 def __getitem__(changeid):
1358 """Try to resolve a changectx."""
1352 """Try to resolve a changectx."""
1359
1353
1360 def __contains__(changeid):
1354 def __contains__(changeid):
1361 """Whether a changeset exists."""
1355 """Whether a changeset exists."""
1362
1356
1363 def __nonzero__():
1357 def __nonzero__():
1364 """Always returns True."""
1358 """Always returns True."""
1365 return True
1359 return True
1366
1360
1367 __bool__ = __nonzero__
1361 __bool__ = __nonzero__
1368
1362
1369 def __len__():
1363 def __len__():
1370 """Returns the number of changesets in the repo."""
1364 """Returns the number of changesets in the repo."""
1371
1365
1372 def __iter__():
1366 def __iter__():
1373 """Iterate over revisions in the changelog."""
1367 """Iterate over revisions in the changelog."""
1374
1368
1375 def revs(expr, *args):
1369 def revs(expr, *args):
1376 """Evaluate a revset.
1370 """Evaluate a revset.
1377
1371
1378 Emits revisions.
1372 Emits revisions.
1379 """
1373 """
1380
1374
1381 def set(expr, *args):
1375 def set(expr, *args):
1382 """Evaluate a revset.
1376 """Evaluate a revset.
1383
1377
1384 Emits changectx instances.
1378 Emits changectx instances.
1385 """
1379 """
1386
1380
1387 def anyrevs(specs, user=False, localalias=None):
1381 def anyrevs(specs, user=False, localalias=None):
1388 """Find revisions matching one of the given revsets."""
1382 """Find revisions matching one of the given revsets."""
1389
1383
1390 def url():
1384 def url():
1391 """Returns a string representing the location of this repo."""
1385 """Returns a string representing the location of this repo."""
1392
1386
1393 def hook(name, throw=False, **args):
1387 def hook(name, throw=False, **args):
1394 """Call a hook."""
1388 """Call a hook."""
1395
1389
1396 def tags():
1390 def tags():
1397 """Return a mapping of tag to node."""
1391 """Return a mapping of tag to node."""
1398
1392
1399 def tagtype(tagname):
1393 def tagtype(tagname):
1400 """Return the type of a given tag."""
1394 """Return the type of a given tag."""
1401
1395
1402 def tagslist():
1396 def tagslist():
1403 """Return a list of tags ordered by revision."""
1397 """Return a list of tags ordered by revision."""
1404
1398
1405 def nodetags(node):
1399 def nodetags(node):
1406 """Return the tags associated with a node."""
1400 """Return the tags associated with a node."""
1407
1401
1408 def nodebookmarks(node):
1402 def nodebookmarks(node):
1409 """Return the list of bookmarks pointing to the specified node."""
1403 """Return the list of bookmarks pointing to the specified node."""
1410
1404
1411 def branchmap():
1405 def branchmap():
1412 """Return a mapping of branch to heads in that branch."""
1406 """Return a mapping of branch to heads in that branch."""
1413
1407
1414 def revbranchcache():
1408 def revbranchcache():
1415 pass
1409 pass
1416
1410
1417 def branchtip(branchtip, ignoremissing=False):
1411 def branchtip(branchtip, ignoremissing=False):
1418 """Return the tip node for a given branch."""
1412 """Return the tip node for a given branch."""
1419
1413
1420 def lookup(key):
1414 def lookup(key):
1421 """Resolve the node for a revision."""
1415 """Resolve the node for a revision."""
1422
1416
1423 def lookupbranch(key):
1417 def lookupbranch(key):
1424 """Look up the branch name of the given revision or branch name."""
1418 """Look up the branch name of the given revision or branch name."""
1425
1419
1426 def known(nodes):
1420 def known(nodes):
1427 """Determine whether a series of nodes is known.
1421 """Determine whether a series of nodes is known.
1428
1422
1429 Returns a list of bools.
1423 Returns a list of bools.
1430 """
1424 """
1431
1425
1432 def local():
1426 def local():
1433 """Whether the repository is local."""
1427 """Whether the repository is local."""
1434 return True
1428 return True
1435
1429
1436 def publishing():
1430 def publishing():
1437 """Whether the repository is a publishing repository."""
1431 """Whether the repository is a publishing repository."""
1438
1432
1439 def cancopy():
1433 def cancopy():
1440 pass
1434 pass
1441
1435
1442 def shared():
1436 def shared():
1443 """The type of shared repository or None."""
1437 """The type of shared repository or None."""
1444
1438
1445 def wjoin(f, *insidef):
1439 def wjoin(f, *insidef):
1446 """Calls self.vfs.reljoin(self.root, f, *insidef)"""
1440 """Calls self.vfs.reljoin(self.root, f, *insidef)"""
1447
1441
1448 def file(f):
1442 def file(f):
1449 """Obtain a filelog for a tracked path.
1443 """Obtain a filelog for a tracked path.
1450
1444
1451 The returned type conforms to the ``ifilestorage`` interface.
1445 The returned type conforms to the ``ifilestorage`` interface.
1452 """
1446 """
1453
1447
1454 def setparents(p1, p2):
1448 def setparents(p1, p2):
1455 """Set the parent nodes of the working directory."""
1449 """Set the parent nodes of the working directory."""
1456
1450
1457 def filectx(path, changeid=None, fileid=None):
1451 def filectx(path, changeid=None, fileid=None):
1458 """Obtain a filectx for the given file revision."""
1452 """Obtain a filectx for the given file revision."""
1459
1453
1460 def getcwd():
1454 def getcwd():
1461 """Obtain the current working directory from the dirstate."""
1455 """Obtain the current working directory from the dirstate."""
1462
1456
1463 def pathto(f, cwd=None):
1457 def pathto(f, cwd=None):
1464 """Obtain the relative path to a file."""
1458 """Obtain the relative path to a file."""
1465
1459
1466 def adddatafilter(name, fltr):
1460 def adddatafilter(name, fltr):
1467 pass
1461 pass
1468
1462
1469 def wread(filename):
1463 def wread(filename):
1470 """Read a file from wvfs, using data filters."""
1464 """Read a file from wvfs, using data filters."""
1471
1465
1472 def wwrite(filename, data, flags, backgroundclose=False, **kwargs):
1466 def wwrite(filename, data, flags, backgroundclose=False, **kwargs):
1473 """Write data to a file in the wvfs, using data filters."""
1467 """Write data to a file in the wvfs, using data filters."""
1474
1468
1475 def wwritedata(filename, data):
1469 def wwritedata(filename, data):
1476 """Resolve data for writing to the wvfs, using data filters."""
1470 """Resolve data for writing to the wvfs, using data filters."""
1477
1471
1478 def currenttransaction():
1472 def currenttransaction():
1479 """Obtain the current transaction instance or None."""
1473 """Obtain the current transaction instance or None."""
1480
1474
1481 def transaction(desc, report=None):
1475 def transaction(desc, report=None):
1482 """Open a new transaction to write to the repository."""
1476 """Open a new transaction to write to the repository."""
1483
1477
1484 def undofiles():
1478 def undofiles():
1485 """Returns a list of (vfs, path) for files to undo transactions."""
1479 """Returns a list of (vfs, path) for files to undo transactions."""
1486
1480
1487 def recover():
1481 def recover():
1488 """Roll back an interrupted transaction."""
1482 """Roll back an interrupted transaction."""
1489
1483
1490 def rollback(dryrun=False, force=False):
1484 def rollback(dryrun=False, force=False):
1491 """Undo the last transaction.
1485 """Undo the last transaction.
1492
1486
1493 DANGEROUS.
1487 DANGEROUS.
1494 """
1488 """
1495
1489
1496 def updatecaches(tr=None, full=False):
1490 def updatecaches(tr=None, full=False):
1497 """Warm repo caches."""
1491 """Warm repo caches."""
1498
1492
1499 def invalidatecaches():
1493 def invalidatecaches():
1500 """Invalidate cached data due to the repository mutating."""
1494 """Invalidate cached data due to the repository mutating."""
1501
1495
1502 def invalidatevolatilesets():
1496 def invalidatevolatilesets():
1503 pass
1497 pass
1504
1498
1505 def invalidatedirstate():
1499 def invalidatedirstate():
1506 """Invalidate the dirstate."""
1500 """Invalidate the dirstate."""
1507
1501
1508 def invalidate(clearfilecache=False):
1502 def invalidate(clearfilecache=False):
1509 pass
1503 pass
1510
1504
1511 def invalidateall():
1505 def invalidateall():
1512 pass
1506 pass
1513
1507
1514 def lock(wait=True):
1508 def lock(wait=True):
1515 """Lock the repository store and return a lock instance."""
1509 """Lock the repository store and return a lock instance."""
1516
1510
1517 def wlock(wait=True):
1511 def wlock(wait=True):
1518 """Lock the non-store parts of the repository."""
1512 """Lock the non-store parts of the repository."""
1519
1513
1520 def currentwlock():
1514 def currentwlock():
1521 """Return the wlock if it's held or None."""
1515 """Return the wlock if it's held or None."""
1522
1516
1523 def checkcommitpatterns(wctx, vdirs, match, status, fail):
1517 def checkcommitpatterns(wctx, vdirs, match, status, fail):
1524 pass
1518 pass
1525
1519
1526 def commit(text='', user=None, date=None, match=None, force=False,
1520 def commit(text='', user=None, date=None, match=None, force=False,
1527 editor=False, extra=None):
1521 editor=False, extra=None):
1528 """Add a new revision to the repository."""
1522 """Add a new revision to the repository."""
1529
1523
1530 def commitctx(ctx, error=False):
1524 def commitctx(ctx, error=False):
1531 """Commit a commitctx instance to the repository."""
1525 """Commit a commitctx instance to the repository."""
1532
1526
1533 def destroying():
1527 def destroying():
1534 """Inform the repository that nodes are about to be destroyed."""
1528 """Inform the repository that nodes are about to be destroyed."""
1535
1529
1536 def destroyed():
1530 def destroyed():
1537 """Inform the repository that nodes have been destroyed."""
1531 """Inform the repository that nodes have been destroyed."""
1538
1532
1539 def status(node1='.', node2=None, match=None, ignored=False,
1533 def status(node1='.', node2=None, match=None, ignored=False,
1540 clean=False, unknown=False, listsubrepos=False):
1534 clean=False, unknown=False, listsubrepos=False):
1541 """Convenience method to call repo[x].status()."""
1535 """Convenience method to call repo[x].status()."""
1542
1536
1543 def addpostdsstatus(ps):
1537 def addpostdsstatus(ps):
1544 pass
1538 pass
1545
1539
1546 def postdsstatus():
1540 def postdsstatus():
1547 pass
1541 pass
1548
1542
1549 def clearpostdsstatus():
1543 def clearpostdsstatus():
1550 pass
1544 pass
1551
1545
1552 def heads(start=None):
1546 def heads(start=None):
1553 """Obtain list of nodes that are DAG heads."""
1547 """Obtain list of nodes that are DAG heads."""
1554
1548
1555 def branchheads(branch=None, start=None, closed=False):
1549 def branchheads(branch=None, start=None, closed=False):
1556 pass
1550 pass
1557
1551
1558 def branches(nodes):
1552 def branches(nodes):
1559 pass
1553 pass
1560
1554
1561 def between(pairs):
1555 def between(pairs):
1562 pass
1556 pass
1563
1557
1564 def checkpush(pushop):
1558 def checkpush(pushop):
1565 pass
1559 pass
1566
1560
1567 prepushoutgoinghooks = interfaceutil.Attribute(
1561 prepushoutgoinghooks = interfaceutil.Attribute(
1568 """util.hooks instance.""")
1562 """util.hooks instance.""")
1569
1563
1570 def pushkey(namespace, key, old, new):
1564 def pushkey(namespace, key, old, new):
1571 pass
1565 pass
1572
1566
1573 def listkeys(namespace):
1567 def listkeys(namespace):
1574 pass
1568 pass
1575
1569
1576 def debugwireargs(one, two, three=None, four=None, five=None):
1570 def debugwireargs(one, two, three=None, four=None, five=None):
1577 pass
1571 pass
1578
1572
1579 def savecommitmessage(text):
1573 def savecommitmessage(text):
1580 pass
1574 pass
@@ -1,641 +1,647
1 # streamclone.py - producing and consuming streaming repository data
1 # streamclone.py - producing and consuming streaming repository data
2 #
2 #
3 # Copyright 2015 Gregory Szorc <gregory.szorc@gmail.com>
3 # Copyright 2015 Gregory Szorc <gregory.szorc@gmail.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from __future__ import absolute_import
8 from __future__ import absolute_import
9
9
10 import contextlib
10 import contextlib
11 import os
11 import os
12 import struct
12 import struct
13 import warnings
13 import warnings
14
14
15 from .i18n import _
15 from .i18n import _
16 from . import (
16 from . import (
17 branchmap,
17 branchmap,
18 cacheutil,
18 cacheutil,
19 error,
19 error,
20 phases,
20 phases,
21 pycompat,
21 pycompat,
22 store,
22 store,
23 util,
23 util,
24 )
24 )
25
25
26 def canperformstreamclone(pullop, bundle2=False):
26 def canperformstreamclone(pullop, bundle2=False):
27 """Whether it is possible to perform a streaming clone as part of pull.
27 """Whether it is possible to perform a streaming clone as part of pull.
28
28
29 ``bundle2`` will cause the function to consider stream clone through
29 ``bundle2`` will cause the function to consider stream clone through
30 bundle2 and only through bundle2.
30 bundle2 and only through bundle2.
31
31
32 Returns a tuple of (supported, requirements). ``supported`` is True if
32 Returns a tuple of (supported, requirements). ``supported`` is True if
33 streaming clone is supported and False otherwise. ``requirements`` is
33 streaming clone is supported and False otherwise. ``requirements`` is
34 a set of repo requirements from the remote, or ``None`` if stream clone
34 a set of repo requirements from the remote, or ``None`` if stream clone
35 isn't supported.
35 isn't supported.
36 """
36 """
37 repo = pullop.repo
37 repo = pullop.repo
38 remote = pullop.remote
38 remote = pullop.remote
39
39
40 bundle2supported = False
40 bundle2supported = False
41 if pullop.canusebundle2:
41 if pullop.canusebundle2:
42 if 'v2' in pullop.remotebundle2caps.get('stream', []):
42 if 'v2' in pullop.remotebundle2caps.get('stream', []):
43 bundle2supported = True
43 bundle2supported = True
44 # else
44 # else
45 # Server doesn't support bundle2 stream clone or doesn't support
45 # Server doesn't support bundle2 stream clone or doesn't support
46 # the versions we support. Fall back and possibly allow legacy.
46 # the versions we support. Fall back and possibly allow legacy.
47
47
48 # Ensures legacy code path uses available bundle2.
48 # Ensures legacy code path uses available bundle2.
49 if bundle2supported and not bundle2:
49 if bundle2supported and not bundle2:
50 return False, None
50 return False, None
51 # Ensures bundle2 doesn't try to do a stream clone if it isn't supported.
51 # Ensures bundle2 doesn't try to do a stream clone if it isn't supported.
52 elif bundle2 and not bundle2supported:
52 elif bundle2 and not bundle2supported:
53 return False, None
53 return False, None
54
54
55 # Streaming clone only works on empty repositories.
55 # Streaming clone only works on empty repositories.
56 if len(repo):
56 if len(repo):
57 return False, None
57 return False, None
58
58
59 # Streaming clone only works if all data is being requested.
59 # Streaming clone only works if all data is being requested.
60 if pullop.heads:
60 if pullop.heads:
61 return False, None
61 return False, None
62
62
63 streamrequested = pullop.streamclonerequested
63 streamrequested = pullop.streamclonerequested
64
64
65 # If we don't have a preference, let the server decide for us. This
65 # If we don't have a preference, let the server decide for us. This
66 # likely only comes into play in LANs.
66 # likely only comes into play in LANs.
67 if streamrequested is None:
67 if streamrequested is None:
68 # The server can advertise whether to prefer streaming clone.
68 # The server can advertise whether to prefer streaming clone.
69 streamrequested = remote.capable('stream-preferred')
69 streamrequested = remote.capable('stream-preferred')
70
70
71 if not streamrequested:
71 if not streamrequested:
72 return False, None
72 return False, None
73
73
74 # In order for stream clone to work, the client has to support all the
74 # In order for stream clone to work, the client has to support all the
75 # requirements advertised by the server.
75 # requirements advertised by the server.
76 #
76 #
77 # The server advertises its requirements via the "stream" and "streamreqs"
77 # The server advertises its requirements via the "stream" and "streamreqs"
78 # capability. "stream" (a value-less capability) is advertised if and only
78 # capability. "stream" (a value-less capability) is advertised if and only
79 # if the only requirement is "revlogv1." Else, the "streamreqs" capability
79 # if the only requirement is "revlogv1." Else, the "streamreqs" capability
80 # is advertised and contains a comma-delimited list of requirements.
80 # is advertised and contains a comma-delimited list of requirements.
81 requirements = set()
81 requirements = set()
82 if remote.capable('stream'):
82 if remote.capable('stream'):
83 requirements.add('revlogv1')
83 requirements.add('revlogv1')
84 else:
84 else:
85 streamreqs = remote.capable('streamreqs')
85 streamreqs = remote.capable('streamreqs')
86 # This is weird and shouldn't happen with modern servers.
86 # This is weird and shouldn't happen with modern servers.
87 if not streamreqs:
87 if not streamreqs:
88 pullop.repo.ui.warn(_(
88 pullop.repo.ui.warn(_(
89 'warning: stream clone requested but server has them '
89 'warning: stream clone requested but server has them '
90 'disabled\n'))
90 'disabled\n'))
91 return False, None
91 return False, None
92
92
93 streamreqs = set(streamreqs.split(','))
93 streamreqs = set(streamreqs.split(','))
94 # Server requires something we don't support. Bail.
94 # Server requires something we don't support. Bail.
95 missingreqs = streamreqs - repo.supportedformats
95 missingreqs = streamreqs - repo.supportedformats
96 if missingreqs:
96 if missingreqs:
97 pullop.repo.ui.warn(_(
97 pullop.repo.ui.warn(_(
98 'warning: stream clone requested but client is missing '
98 'warning: stream clone requested but client is missing '
99 'requirements: %s\n') % ', '.join(sorted(missingreqs)))
99 'requirements: %s\n') % ', '.join(sorted(missingreqs)))
100 pullop.repo.ui.warn(
100 pullop.repo.ui.warn(
101 _('(see https://www.mercurial-scm.org/wiki/MissingRequirement '
101 _('(see https://www.mercurial-scm.org/wiki/MissingRequirement '
102 'for more information)\n'))
102 'for more information)\n'))
103 return False, None
103 return False, None
104 requirements = streamreqs
104 requirements = streamreqs
105
105
106 return True, requirements
106 return True, requirements
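For illustration, a hedged, standalone sketch of the same requirement negotiation (the capability mapping and the supportedformats set are invented; the flow mirrors the checks in canperformstreamclone above rather than adding anything new):

# Hypothetical example of the "stream" / "streamreqs" negotiation.
supportedformats = {b'revlogv1', b'generaldelta', b'store'}

def negotiate(caps):
    if b'stream' in caps:                 # value-less capability
        return True, {b'revlogv1'}
    streamreqs = caps.get(b'streamreqs')
    if not streamreqs:
        return False, None                # server has stream clone disabled
    reqs = set(streamreqs.split(b','))
    if reqs - supportedformats:           # server needs something we lack
        return False, None
    return True, reqs

supported, reqs = negotiate({b'streamreqs': b'revlogv1,generaldelta'})
assert supported and reqs == {b'revlogv1', b'generaldelta'}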
107
107
108 def maybeperformlegacystreamclone(pullop):
108 def maybeperformlegacystreamclone(pullop):
109 """Possibly perform a legacy stream clone operation.
109 """Possibly perform a legacy stream clone operation.
110
110
111 Legacy stream clones are performed as part of pull but before all other
111 Legacy stream clones are performed as part of pull but before all other
112 operations.
112 operations.
113
113
114 A legacy stream clone will not be performed if a bundle2 stream clone is
114 A legacy stream clone will not be performed if a bundle2 stream clone is
115 supported.
115 supported.
116 """
116 """
117 from . import localrepo
118
117 supported, requirements = canperformstreamclone(pullop)
119 supported, requirements = canperformstreamclone(pullop)
118
120
119 if not supported:
121 if not supported:
120 return
122 return
121
123
122 repo = pullop.repo
124 repo = pullop.repo
123 remote = pullop.remote
125 remote = pullop.remote
124
126
125 # Save remote branchmap. We will use it later to speed up branchcache
127 # Save remote branchmap. We will use it later to speed up branchcache
126 # creation.
128 # creation.
127 rbranchmap = None
129 rbranchmap = None
128 if remote.capable('branchmap'):
130 if remote.capable('branchmap'):
129 with remote.commandexecutor() as e:
131 with remote.commandexecutor() as e:
130 rbranchmap = e.callcommand('branchmap', {}).result()
132 rbranchmap = e.callcommand('branchmap', {}).result()
131
133
132 repo.ui.status(_('streaming all changes\n'))
134 repo.ui.status(_('streaming all changes\n'))
133
135
134 with remote.commandexecutor() as e:
136 with remote.commandexecutor() as e:
135 fp = e.callcommand('stream_out', {}).result()
137 fp = e.callcommand('stream_out', {}).result()
136
138
137 # TODO strictly speaking, this code should all be inside the context
139 # TODO strictly speaking, this code should all be inside the context
138 # manager because the context manager is supposed to ensure all wire state
140 # manager because the context manager is supposed to ensure all wire state
139 # is flushed when exiting. But the legacy peers don't do this, so it
141 # is flushed when exiting. But the legacy peers don't do this, so it
140 # doesn't matter.
142 # doesn't matter.
141 l = fp.readline()
143 l = fp.readline()
142 try:
144 try:
143 resp = int(l)
145 resp = int(l)
144 except ValueError:
146 except ValueError:
145 raise error.ResponseError(
147 raise error.ResponseError(
146 _('unexpected response from remote server:'), l)
148 _('unexpected response from remote server:'), l)
147 if resp == 1:
149 if resp == 1:
148 raise error.Abort(_('operation forbidden by server'))
150 raise error.Abort(_('operation forbidden by server'))
149 elif resp == 2:
151 elif resp == 2:
150 raise error.Abort(_('locking the remote repository failed'))
152 raise error.Abort(_('locking the remote repository failed'))
151 elif resp != 0:
153 elif resp != 0:
152 raise error.Abort(_('the server sent an unknown error code'))
154 raise error.Abort(_('the server sent an unknown error code'))
153
155
154 l = fp.readline()
156 l = fp.readline()
155 try:
157 try:
156 filecount, bytecount = map(int, l.split(' ', 1))
158 filecount, bytecount = map(int, l.split(' ', 1))
157 except (ValueError, TypeError):
159 except (ValueError, TypeError):
158 raise error.ResponseError(
160 raise error.ResponseError(
159 _('unexpected response from remote server:'), l)
161 _('unexpected response from remote server:'), l)
160
162
161 with repo.lock():
163 with repo.lock():
162 consumev1(repo, fp, filecount, bytecount)
164 consumev1(repo, fp, filecount, bytecount)
163
165
164 # new requirements = old non-format requirements +
166 # new requirements = old non-format requirements +
165 # new format-related remote requirements
167 # new format-related remote requirements
166 # requirements from the streamed-in repository
168 # requirements from the streamed-in repository
167 repo.requirements = requirements | (
169 repo.requirements = requirements | (
168 repo.requirements - repo.supportedformats)
170 repo.requirements - repo.supportedformats)
169 repo._applyopenerreqs()
171 repo.svfs.options = localrepo.resolvestorevfsoptions(
172 repo.ui, repo.requirements)
170 repo._writerequirements()
173 repo._writerequirements()
171
174
172 if rbranchmap:
175 if rbranchmap:
173 branchmap.replacecache(repo, rbranchmap)
176 branchmap.replacecache(repo, rbranchmap)
174
177
175 repo.invalidate()
178 repo.invalidate()
176
179
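# Illustration (not part of the original diff): the two "stream_out" header
# lines read above are plain text: a status code ("0" = success, "1" =
# operation forbidden, "2" = remote locking failed) followed by
# "<filecount> <bytecount>". A minimal stand-alone parser for that header
# might look like the hypothetical helper below (not part of Mercurial):
def _parselegacystreamheader(fp):
    status = int(fp.readline())
    if status == 1:
        raise ValueError('operation forbidden by server')
    if status == 2:
        raise ValueError('locking the remote repository failed')
    if status != 0:
        raise ValueError('unknown error code %d' % status)
    filecount, bytecount = map(int, fp.readline().split(None, 1))
    return filecount, bytecount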
177 def allowservergeneration(repo):
180 def allowservergeneration(repo):
178 """Whether streaming clones are allowed from the server."""
181 """Whether streaming clones are allowed from the server."""
179 if not repo.ui.configbool('server', 'uncompressed', untrusted=True):
182 if not repo.ui.configbool('server', 'uncompressed', untrusted=True):
180 return False
183 return False
181
184
182 # The way stream clone works makes it impossible to hide secret changesets.
185 # The way stream clone works makes it impossible to hide secret changesets.
183 # So don't allow this by default.
186 # So don't allow this by default.
184 secret = phases.hassecret(repo)
187 secret = phases.hassecret(repo)
185 if secret:
188 if secret:
186 return repo.ui.configbool('server', 'uncompressedallowsecret')
189 return repo.ui.configbool('server', 'uncompressedallowsecret')
187
190
188 return True
191 return True
189
192
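# Illustration (not part of the original diff): the two checks above are
# driven by server-side configuration. A server that wants to offer stream
# clones even when the repository contains secret changesets would use an
# hgrc along these lines:
#
#   [server]
#   uncompressed = True
#   uncompressedallowsecret = True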
190 # This is its own function so extensions can override it.
193 # This is its own function so extensions can override it.
191 def _walkstreamfiles(repo):
194 def _walkstreamfiles(repo):
192 return repo.store.walk()
195 return repo.store.walk()
193
196
194 def generatev1(repo):
197 def generatev1(repo):
195 """Emit content for version 1 of a streaming clone.
198 """Emit content for version 1 of a streaming clone.
196
199
197 This returns a 3-tuple of (file count, byte size, data iterator).
200 This returns a 3-tuple of (file count, byte size, data iterator).
198
201
199 The data iterator consists of one entry per file being transferred.
202 The data iterator consists of one entry per file being transferred.
200 Each file entry starts as a line with the file name and integer size
203 Each file entry starts as a line with the file name and integer size
201 delimited by a null byte.
204 delimited by a null byte.
202
205
203 The raw file data follows. Following the raw file data is the next file
206 The raw file data follows. Following the raw file data is the next file
204 entry, or EOF.
207 entry, or EOF.
205
208
206 When used on the wire protocol, an additional line indicating protocol
209 When used on the wire protocol, an additional line indicating protocol
207 success will be prepended to the stream. This function is not responsible
210 success will be prepended to the stream. This function is not responsible
208 for adding it.
211 for adding it.
209
212
210 This function will obtain a repository lock to ensure a consistent view of
213 This function will obtain a repository lock to ensure a consistent view of
211 the store is captured. It therefore may raise LockError.
214 the store is captured. It therefore may raise LockError.
212 """
215 """
213 entries = []
216 entries = []
214 total_bytes = 0
217 total_bytes = 0
215 # Get consistent snapshot of repo, lock during scan.
218 # Get consistent snapshot of repo, lock during scan.
216 with repo.lock():
219 with repo.lock():
217 repo.ui.debug('scanning\n')
220 repo.ui.debug('scanning\n')
218 for name, ename, size in _walkstreamfiles(repo):
221 for name, ename, size in _walkstreamfiles(repo):
219 if size:
222 if size:
220 entries.append((name, size))
223 entries.append((name, size))
221 total_bytes += size
224 total_bytes += size
222
225
223 repo.ui.debug('%d files, %d bytes to transfer\n' %
226 repo.ui.debug('%d files, %d bytes to transfer\n' %
224 (len(entries), total_bytes))
227 (len(entries), total_bytes))
225
228
226 svfs = repo.svfs
229 svfs = repo.svfs
227 debugflag = repo.ui.debugflag
230 debugflag = repo.ui.debugflag
228
231
229 def emitrevlogdata():
232 def emitrevlogdata():
230 for name, size in entries:
233 for name, size in entries:
231 if debugflag:
234 if debugflag:
232 repo.ui.debug('sending %s (%d bytes)\n' % (name, size))
235 repo.ui.debug('sending %s (%d bytes)\n' % (name, size))
233 # partially encode name over the wire for backwards compat
236 # partially encode name over the wire for backwards compat
234 yield '%s\0%d\n' % (store.encodedir(name), size)
237 yield '%s\0%d\n' % (store.encodedir(name), size)
235 # auditing at this stage is both pointless (paths are already
238 # auditing at this stage is both pointless (paths are already
236 # trusted by the local repo) and expensive
239 # trusted by the local repo) and expensive
237 with svfs(name, 'rb', auditpath=False) as fp:
240 with svfs(name, 'rb', auditpath=False) as fp:
238 if size <= 65536:
241 if size <= 65536:
239 yield fp.read(size)
242 yield fp.read(size)
240 else:
243 else:
241 for chunk in util.filechunkiter(fp, limit=size):
244 for chunk in util.filechunkiter(fp, limit=size):
242 yield chunk
245 yield chunk
243
246
244 return len(entries), total_bytes, emitrevlogdata()
247 return len(entries), total_bytes, emitrevlogdata()
245
248
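# Illustration (not part of the original diff): each per-file entry emitted
# by generatev1() is a header line "<store-encoded name>\0<size>\n" followed
# by exactly <size> bytes of raw file data. A minimal reader of a single
# entry (hypothetical helper; the real consumer also decodes the store name)
# could look like this:
def _readv1entry(fp):
    header = fp.readline()
    name, size = header.split(b'\0', 1)
    size = int(size)
    data = fp.read(size)
    return name, data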
246 def generatev1wireproto(repo):
249 def generatev1wireproto(repo):
247 """Emit content for version 1 of streaming clone suitable for the wire.
250 """Emit content for version 1 of streaming clone suitable for the wire.
248
251
249 This is the data output from ``generatev1()`` with 2 header lines. The
252 This is the data output from ``generatev1()`` with 2 header lines. The
250 first line indicates overall success. The 2nd contains the file count and
253 first line indicates overall success. The 2nd contains the file count and
251 byte size of payload.
254 byte size of payload.
252
255
253 The success line contains "0" for success, "1" for stream generation not
256 The success line contains "0" for success, "1" for stream generation not
254 allowed, and "2" for error locking the repository (possibly indicating
257 allowed, and "2" for error locking the repository (possibly indicating
255 a permissions error for the server process).
258 a permissions error for the server process).
256 """
259 """
257 if not allowservergeneration(repo):
260 if not allowservergeneration(repo):
258 yield '1\n'
261 yield '1\n'
259 return
262 return
260
263
261 try:
264 try:
262 filecount, bytecount, it = generatev1(repo)
265 filecount, bytecount, it = generatev1(repo)
263 except error.LockError:
266 except error.LockError:
264 yield '2\n'
267 yield '2\n'
265 return
268 return
266
269
267 # Indicates successful response.
270 # Indicates successful response.
268 yield '0\n'
271 yield '0\n'
269 yield '%d %d\n' % (filecount, bytecount)
272 yield '%d %d\n' % (filecount, bytecount)
270 for chunk in it:
273 for chunk in it:
271 yield chunk
274 yield chunk
272
275
273 def generatebundlev1(repo, compression='UN'):
276 def generatebundlev1(repo, compression='UN'):
274 """Emit content for version 1 of a stream clone bundle.
277 """Emit content for version 1 of a stream clone bundle.
275
278
276 The first 4 bytes of the output ("HGS1") denote this as stream clone
279 The first 4 bytes of the output ("HGS1") denote this as stream clone
277 bundle version 1.
280 bundle version 1.
278
281
279 The next 2 bytes indicate the compression type. Only "UN" is currently
282 The next 2 bytes indicate the compression type. Only "UN" is currently
280 supported.
283 supported.
281
284
282 The next 16 bytes are two 64-bit big endian unsigned integers indicating
285 The next 16 bytes are two 64-bit big endian unsigned integers indicating
283 file count and byte count, respectively.
286 file count and byte count, respectively.
284
287
285 The next 2 bytes are a 16-bit big endian unsigned short declaring the length
288 The next 2 bytes are a 16-bit big endian unsigned short declaring the length
286 of the requirements string, including a trailing \0. The following N bytes
289 of the requirements string, including a trailing \0. The following N bytes
287 are the requirements string, which is ASCII containing a comma-delimited
290 are the requirements string, which is ASCII containing a comma-delimited
288 list of repo requirements that are needed to support the data.
291 list of repo requirements that are needed to support the data.
289
292
290 The remaining content is the output of ``generatev1()`` (which may be
293 The remaining content is the output of ``generatev1()`` (which may be
291 compressed in the future).
294 compressed in the future).
292
295
293 Returns a tuple of (requirements, data generator).
296 Returns a tuple of (requirements, data generator).
294 """
297 """
295 if compression != 'UN':
298 if compression != 'UN':
296 raise ValueError('we do not support the compression argument yet')
299 raise ValueError('we do not support the compression argument yet')
297
300
298 requirements = repo.requirements & repo.supportedformats
301 requirements = repo.requirements & repo.supportedformats
299 requires = ','.join(sorted(requirements))
302 requires = ','.join(sorted(requirements))
300
303
301 def gen():
304 def gen():
302 yield 'HGS1'
305 yield 'HGS1'
303 yield compression
306 yield compression
304
307
305 filecount, bytecount, it = generatev1(repo)
308 filecount, bytecount, it = generatev1(repo)
306 repo.ui.status(_('writing %d bytes for %d files\n') %
309 repo.ui.status(_('writing %d bytes for %d files\n') %
307 (bytecount, filecount))
310 (bytecount, filecount))
308
311
309 yield struct.pack('>QQ', filecount, bytecount)
312 yield struct.pack('>QQ', filecount, bytecount)
310 yield struct.pack('>H', len(requires) + 1)
313 yield struct.pack('>H', len(requires) + 1)
311 yield requires + '\0'
314 yield requires + '\0'
312
315
313 # This is where we'll add compression in the future.
316 # This is where we'll add compression in the future.
314 assert compression == 'UN'
317 assert compression == 'UN'
315
318
316 progress = repo.ui.makeprogress(_('bundle'), total=bytecount,
319 progress = repo.ui.makeprogress(_('bundle'), total=bytecount,
317 unit=_('bytes'))
320 unit=_('bytes'))
318 progress.update(0)
321 progress.update(0)
319
322
320 for chunk in it:
323 for chunk in it:
321 progress.increment(step=len(chunk))
324 progress.increment(step=len(chunk))
322 yield chunk
325 yield chunk
323
326
324 progress.complete()
327 progress.complete()
325
328
326 return requirements, gen()
329 return requirements, gen()
327
330
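# Illustration (not part of the original diff): the fixed layout written by
# gen() above is the 4-byte magic "HGS1", a 2-byte compression identifier,
# two 64-bit big-endian counts, a 16-bit big-endian length, and finally the
# NUL-terminated requirements string. A stand-alone sketch in plain Python 3
# (hypothetical helper, using bytes literals rather than Mercurial's internal
# string conventions):
import struct

def _makebundlev1header(filecount, bytecount, requirements):
    requires = b','.join(sorted(requirements)) + b'\0'
    return b''.join([b'HGS1', b'UN',
                     struct.pack('>QQ', filecount, bytecount),
                     struct.pack('>H', len(requires)),
                     requires])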
328 def consumev1(repo, fp, filecount, bytecount):
331 def consumev1(repo, fp, filecount, bytecount):
329 """Apply the contents from version 1 of a streaming clone file handle.
332 """Apply the contents from version 1 of a streaming clone file handle.
330
333
331 This takes the output from "stream_out" and applies it to the specified
334 This takes the output from "stream_out" and applies it to the specified
332 repository.
335 repository.
333
336
334 Like "stream_out," the status line added by the wire protocol is not
337 Like "stream_out," the status line added by the wire protocol is not
335 handled by this function.
338 handled by this function.
336 """
339 """
337 with repo.lock():
340 with repo.lock():
338 repo.ui.status(_('%d files to transfer, %s of data\n') %
341 repo.ui.status(_('%d files to transfer, %s of data\n') %
339 (filecount, util.bytecount(bytecount)))
342 (filecount, util.bytecount(bytecount)))
340 progress = repo.ui.makeprogress(_('clone'), total=bytecount,
343 progress = repo.ui.makeprogress(_('clone'), total=bytecount,
341 unit=_('bytes'))
344 unit=_('bytes'))
342 progress.update(0)
345 progress.update(0)
343 start = util.timer()
346 start = util.timer()
344
347
345 # TODO: get rid of (potential) inconsistency
348 # TODO: get rid of (potential) inconsistency
346 #
349 #
347 # If transaction is started and any @filecache property is
350 # If transaction is started and any @filecache property is
348 # changed at this point, it causes inconsistency between
351 # changed at this point, it causes inconsistency between
349 # in-memory cached property and streamclone-ed file on the
352 # in-memory cached property and streamclone-ed file on the
350 # disk. Nested transaction prevents transaction scope "clone"
353 # disk. Nested transaction prevents transaction scope "clone"
351 # below from writing in-memory changes out at the end of it,
354 # below from writing in-memory changes out at the end of it,
352 # even though in-memory changes are discarded at the end of it
355 # even though in-memory changes are discarded at the end of it
353 # regardless of transaction nesting.
356 # regardless of transaction nesting.
354 #
357 #
355 # But transaction nesting can't be simply prohibited, because
358 # But transaction nesting can't be simply prohibited, because
356 # nesting occurs also in ordinary case (e.g. enabling
359 # nesting occurs also in ordinary case (e.g. enabling
357 # clonebundles).
360 # clonebundles).
358
361
359 with repo.transaction('clone'):
362 with repo.transaction('clone'):
360 with repo.svfs.backgroundclosing(repo.ui, expectedcount=filecount):
363 with repo.svfs.backgroundclosing(repo.ui, expectedcount=filecount):
361 for i in pycompat.xrange(filecount):
364 for i in pycompat.xrange(filecount):
362 # XXX doesn't support '\n' or '\r' in filenames
365 # XXX doesn't support '\n' or '\r' in filenames
363 l = fp.readline()
366 l = fp.readline()
364 try:
367 try:
365 name, size = l.split('\0', 1)
368 name, size = l.split('\0', 1)
366 size = int(size)
369 size = int(size)
367 except (ValueError, TypeError):
370 except (ValueError, TypeError):
368 raise error.ResponseError(
371 raise error.ResponseError(
369 _('unexpected response from remote server:'), l)
372 _('unexpected response from remote server:'), l)
370 if repo.ui.debugflag:
373 if repo.ui.debugflag:
371 repo.ui.debug('adding %s (%s)\n' %
374 repo.ui.debug('adding %s (%s)\n' %
372 (name, util.bytecount(size)))
375 (name, util.bytecount(size)))
373 # for backwards compat, name was partially encoded
376 # for backwards compat, name was partially encoded
374 path = store.decodedir(name)
377 path = store.decodedir(name)
375 with repo.svfs(path, 'w', backgroundclose=True) as ofp:
378 with repo.svfs(path, 'w', backgroundclose=True) as ofp:
376 for chunk in util.filechunkiter(fp, limit=size):
379 for chunk in util.filechunkiter(fp, limit=size):
377 progress.increment(step=len(chunk))
380 progress.increment(step=len(chunk))
378 ofp.write(chunk)
381 ofp.write(chunk)
379
382
380 # force @filecache properties to be reloaded from
383 # force @filecache properties to be reloaded from
381 # streamclone-ed file at next access
384 # streamclone-ed file at next access
382 repo.invalidate(clearfilecache=True)
385 repo.invalidate(clearfilecache=True)
383
386
384 elapsed = util.timer() - start
387 elapsed = util.timer() - start
385 if elapsed <= 0:
388 if elapsed <= 0:
386 elapsed = 0.001
389 elapsed = 0.001
387 progress.complete()
390 progress.complete()
388 repo.ui.status(_('transferred %s in %.1f seconds (%s/sec)\n') %
391 repo.ui.status(_('transferred %s in %.1f seconds (%s/sec)\n') %
389 (util.bytecount(bytecount), elapsed,
392 (util.bytecount(bytecount), elapsed,
390 util.bytecount(bytecount / elapsed)))
393 util.bytecount(bytecount / elapsed)))
391
394
392 def readbundle1header(fp):
395 def readbundle1header(fp):
393 compression = fp.read(2)
396 compression = fp.read(2)
394 if compression != 'UN':
397 if compression != 'UN':
395 raise error.Abort(_('only uncompressed stream clone bundles are '
398 raise error.Abort(_('only uncompressed stream clone bundles are '
396 'supported; got %s') % compression)
399 'supported; got %s') % compression)
397
400
398 filecount, bytecount = struct.unpack('>QQ', fp.read(16))
401 filecount, bytecount = struct.unpack('>QQ', fp.read(16))
399 requireslen = struct.unpack('>H', fp.read(2))[0]
402 requireslen = struct.unpack('>H', fp.read(2))[0]
400 requires = fp.read(requireslen)
403 requires = fp.read(requireslen)
401
404
402 if not requires.endswith('\0'):
405 if not requires.endswith('\0'):
403 raise error.Abort(_('malformed stream clone bundle: '
406 raise error.Abort(_('malformed stream clone bundle: '
404 'requirements not properly encoded'))
407 'requirements not properly encoded'))
405
408
406 requirements = set(requires.rstrip('\0').split(','))
409 requirements = set(requires.rstrip('\0').split(','))
407
410
408 return filecount, bytecount, requirements
411 return filecount, bytecount, requirements
409
412
410 def applybundlev1(repo, fp):
413 def applybundlev1(repo, fp):
411 """Apply the content from a stream clone bundle version 1.
414 """Apply the content from a stream clone bundle version 1.
412
415
413 We assume the 4 byte header has been read and validated and the file handle
416 We assume the 4 byte header has been read and validated and the file handle
414 is at the 2 byte compression identifier.
417 is at the 2 byte compression identifier.
415 """
418 """
416 if len(repo):
419 if len(repo):
417 raise error.Abort(_('cannot apply stream clone bundle on non-empty '
420 raise error.Abort(_('cannot apply stream clone bundle on non-empty '
418 'repo'))
421 'repo'))
419
422
420 filecount, bytecount, requirements = readbundle1header(fp)
423 filecount, bytecount, requirements = readbundle1header(fp)
421 missingreqs = requirements - repo.supportedformats
424 missingreqs = requirements - repo.supportedformats
422 if missingreqs:
425 if missingreqs:
423 raise error.Abort(_('unable to apply stream clone: '
426 raise error.Abort(_('unable to apply stream clone: '
424 'unsupported format: %s') %
427 'unsupported format: %s') %
425 ', '.join(sorted(missingreqs)))
428 ', '.join(sorted(missingreqs)))
426
429
427 consumev1(repo, fp, filecount, bytecount)
430 consumev1(repo, fp, filecount, bytecount)
428
431
429 class streamcloneapplier(object):
432 class streamcloneapplier(object):
430 """Class to manage applying streaming clone bundles.
433 """Class to manage applying streaming clone bundles.
431
434
432 We need to wrap ``applybundlev1()`` in a dedicated type to enable bundle
435 We need to wrap ``applybundlev1()`` in a dedicated type to enable bundle
433 readers to perform bundle type-specific functionality.
436 readers to perform bundle type-specific functionality.
434 """
437 """
435 def __init__(self, fh):
438 def __init__(self, fh):
436 self._fh = fh
439 self._fh = fh
437
440
438 def apply(self, repo):
441 def apply(self, repo):
439 return applybundlev1(repo, self._fh)
442 return applybundlev1(repo, self._fh)
440
443
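# Illustration (not part of the original diff): a caller that has already
# read and validated the 4-byte "HGS1" magic hands the remaining stream to
# streamcloneapplier, which leaves the file handle positioned exactly where
# applybundlev1() expects it (at the 2-byte compression id). Hypothetical
# driver:
def _applystreamclonebundlefile(repo, path):
    with open(path, 'rb') as fh:
        if fh.read(4) != b'HGS1':
            raise ValueError('not a stream clone bundle v1')
        streamcloneapplier(fh).apply(repo)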
441 # type of file to stream
444 # type of file to stream
442 _fileappend = 0 # append only file
445 _fileappend = 0 # append only file
443 _filefull = 1 # full snapshot file
446 _filefull = 1 # full snapshot file
444
447
445 # Source of the file
448 # Source of the file
446 _srcstore = 's' # store (svfs)
449 _srcstore = 's' # store (svfs)
447 _srccache = 'c' # cache (cache)
450 _srccache = 'c' # cache (cache)
448
451
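# Illustration (not part of the original diff): the v2 code paths below pass
# entries around as 4-tuples of (source, name, file type, data), for example
# (hypothetical file names):
#   ('s', 'data/foo.i', _fileappend, 1024)  # append-only revlog, size known up front
#   ('s', 'phaseroots', _filefull, None)    # full snapshot from the store, size resolved later
#   ('c', 'rbc-names-v1', _filefull, None)  # cache file, copied wholesale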
449 # This is its own function so extensions can override it.
452 # This is its own function so extensions can override it.
450 def _walkstreamfullstorefiles(repo):
453 def _walkstreamfullstorefiles(repo):
451 """list snapshot file from the store"""
454 """list snapshot file from the store"""
452 fnames = []
455 fnames = []
453 if not repo.publishing():
456 if not repo.publishing():
454 fnames.append('phaseroots')
457 fnames.append('phaseroots')
455 return fnames
458 return fnames
456
459
457 def _filterfull(entry, copy, vfsmap):
460 def _filterfull(entry, copy, vfsmap):
458 """actually copy the snapshot files"""
461 """actually copy the snapshot files"""
459 src, name, ftype, data = entry
462 src, name, ftype, data = entry
460 if ftype != _filefull:
463 if ftype != _filefull:
461 return entry
464 return entry
462 return (src, name, ftype, copy(vfsmap[src].join(name)))
465 return (src, name, ftype, copy(vfsmap[src].join(name)))
463
466
464 @contextlib.contextmanager
467 @contextlib.contextmanager
465 def maketempcopies():
468 def maketempcopies():
466 """return a function to temporary copy file"""
469 """return a function to temporary copy file"""
467 files = []
470 files = []
468 try:
471 try:
469 def copy(src):
472 def copy(src):
470 fd, dst = pycompat.mkstemp()
473 fd, dst = pycompat.mkstemp()
471 os.close(fd)
474 os.close(fd)
472 files.append(dst)
475 files.append(dst)
473 util.copyfiles(src, dst, hardlink=True)
476 util.copyfiles(src, dst, hardlink=True)
474 return dst
477 return dst
475 yield copy
478 yield copy
476 finally:
479 finally:
477 for tmp in files:
480 for tmp in files:
478 util.tryunlink(tmp)
481 util.tryunlink(tmp)
479
482
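# Illustration (not part of the original diff): maketempcopies() yields a
# copy() callable and unlinks every temporary file it created when the with
# block exits, even on error. Typical shape of a caller (hypothetical path):
#
#   with maketempcopies() as copy:
#       tmp = copy('/path/to/.hg/store/phaseroots')  # hardlink-or-copy snapshot
#       ...  # stream the stable tmp file while the original may keep changing
#   # tmp has been removed here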
480 def _makemap(repo):
483 def _makemap(repo):
481 """make a (src -> vfs) map for the repo"""
484 """make a (src -> vfs) map for the repo"""
482 vfsmap = {
485 vfsmap = {
483 _srcstore: repo.svfs,
486 _srcstore: repo.svfs,
484 _srccache: repo.cachevfs,
487 _srccache: repo.cachevfs,
485 }
488 }
486 # we keep repo.vfs out of the map on purpose; there are too many dangers there
489 # we keep repo.vfs out of the map on purpose; there are too many dangers there
487 # (eg: .hg/hgrc)
490 # (eg: .hg/hgrc)
488 assert repo.vfs not in vfsmap.values()
491 assert repo.vfs not in vfsmap.values()
489
492
490 return vfsmap
493 return vfsmap
491
494
492 def _emit2(repo, entries, totalfilesize):
495 def _emit2(repo, entries, totalfilesize):
493 """actually emit the stream bundle"""
496 """actually emit the stream bundle"""
494 vfsmap = _makemap(repo)
497 vfsmap = _makemap(repo)
495 progress = repo.ui.makeprogress(_('bundle'), total=totalfilesize,
498 progress = repo.ui.makeprogress(_('bundle'), total=totalfilesize,
496 unit=_('bytes'))
499 unit=_('bytes'))
497 progress.update(0)
500 progress.update(0)
498 with maketempcopies() as copy, progress:
501 with maketempcopies() as copy, progress:
499 # copy is delayed until we are in the try
502 # copy is delayed until we are in the try
500 entries = [_filterfull(e, copy, vfsmap) for e in entries]
503 entries = [_filterfull(e, copy, vfsmap) for e in entries]
501 yield None # this releases the lock on the repository
504 yield None # this releases the lock on the repository
502 seen = 0
505 seen = 0
503
506
504 for src, name, ftype, data in entries:
507 for src, name, ftype, data in entries:
505 vfs = vfsmap[src]
508 vfs = vfsmap[src]
506 yield src
509 yield src
507 yield util.uvarintencode(len(name))
510 yield util.uvarintencode(len(name))
508 if ftype == _fileappend:
511 if ftype == _fileappend:
509 fp = vfs(name)
512 fp = vfs(name)
510 size = data
513 size = data
511 elif ftype == _filefull:
514 elif ftype == _filefull:
512 fp = open(data, 'rb')
515 fp = open(data, 'rb')
513 size = util.fstat(fp).st_size
516 size = util.fstat(fp).st_size
514 try:
517 try:
515 yield util.uvarintencode(size)
518 yield util.uvarintencode(size)
516 yield name
519 yield name
517 if size <= 65536:
520 if size <= 65536:
518 chunks = (fp.read(size),)
521 chunks = (fp.read(size),)
519 else:
522 else:
520 chunks = util.filechunkiter(fp, limit=size)
523 chunks = util.filechunkiter(fp, limit=size)
521 for chunk in chunks:
524 for chunk in chunks:
522 seen += len(chunk)
525 seen += len(chunk)
523 progress.update(seen)
526 progress.update(seen)
524 yield chunk
527 yield chunk
525 finally:
528 finally:
526 fp.close()
529 fp.close()
527
530
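# Illustration (not part of the original diff): _emit2() yields None as its
# very first item so that generatev2() can drain that sentinel with next()
# while still holding the repository lock. That guarantees the temporary
# snapshot copies above are taken under the lock, while all later chunks are
# produced lazily after the lock has been released. The idiom in isolation
# (hypothetical, generic sketch):
def primed_under_lock(lock, setup, produce):
    def gen():
        setup()          # runs while the caller still holds the lock
        yield None       # sentinel: setup finished, safe to drop the lock
        for item in produce():
            yield item
    with lock:
        it = gen()
        assert next(it) is None
    return it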
528 def generatev2(repo):
531 def generatev2(repo):
529 """Emit content for version 2 of a streaming clone.
532 """Emit content for version 2 of a streaming clone.
530
533
531 the data stream consists of the following entries:
534 the data stream consists of the following entries:
532 1) A char representing the file destination (eg: store or cache)
535 1) A char representing the file destination (eg: store or cache)
533 2) A varint containing the length of the filename
536 2) A varint containing the length of the filename
534 3) A varint containing the length of file data
537 3) A varint containing the length of file data
535 4) N bytes containing the filename (the internal, store-agnostic form)
538 4) N bytes containing the filename (the internal, store-agnostic form)
536 5) N bytes containing the file data
539 5) N bytes containing the file data
537
540
538 Returns a 3-tuple of (file count, file size, data iterator).
541 Returns a 3-tuple of (file count, file size, data iterator).
539 """
542 """
540
543
541 with repo.lock():
544 with repo.lock():
542
545
543 entries = []
546 entries = []
544 totalfilesize = 0
547 totalfilesize = 0
545
548
546 repo.ui.debug('scanning\n')
549 repo.ui.debug('scanning\n')
547 for name, ename, size in _walkstreamfiles(repo):
550 for name, ename, size in _walkstreamfiles(repo):
548 if size:
551 if size:
549 entries.append((_srcstore, name, _fileappend, size))
552 entries.append((_srcstore, name, _fileappend, size))
550 totalfilesize += size
553 totalfilesize += size
551 for name in _walkstreamfullstorefiles(repo):
554 for name in _walkstreamfullstorefiles(repo):
552 if repo.svfs.exists(name):
555 if repo.svfs.exists(name):
553 totalfilesize += repo.svfs.lstat(name).st_size
556 totalfilesize += repo.svfs.lstat(name).st_size
554 entries.append((_srcstore, name, _filefull, None))
557 entries.append((_srcstore, name, _filefull, None))
555 for name in cacheutil.cachetocopy(repo):
558 for name in cacheutil.cachetocopy(repo):
556 if repo.cachevfs.exists(name):
559 if repo.cachevfs.exists(name):
557 totalfilesize += repo.cachevfs.lstat(name).st_size
560 totalfilesize += repo.cachevfs.lstat(name).st_size
558 entries.append((_srccache, name, _filefull, None))
561 entries.append((_srccache, name, _filefull, None))
559
562
560 chunks = _emit2(repo, entries, totalfilesize)
563 chunks = _emit2(repo, entries, totalfilesize)
561 first = next(chunks)
564 first = next(chunks)
562 assert first is None
565 assert first is None
563
566
564 return len(entries), totalfilesize, chunks
567 return len(entries), totalfilesize, chunks
565
568
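# Illustration (not part of the original diff): the varints used above for
# the name and data lengths are LEB128-style unsigned integers: 7 payload
# bits per byte, least significant group first, with the continuation bit
# set on every byte except the last. A minimal stand-alone decoder, assumed
# to be equivalent in spirit to util.uvarintdecodestream (hypothetical
# helper):
def _readuvarint(fp):
    result = 0
    shift = 0
    while True:
        byte = ord(fp.read(1))
        result |= (byte & 0x7f) << shift
        if not byte & 0x80:
            return result
        shift += 7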
566 @contextlib.contextmanager
569 @contextlib.contextmanager
567 def nested(*ctxs):
570 def nested(*ctxs):
568 with warnings.catch_warnings():
571 with warnings.catch_warnings():
569 # For some reason, Python decided 'nested' was deprecated without
572 # For some reason, Python decided 'nested' was deprecated without
570 # replacement. The official advice is to filter out the deprecation
573 # replacement. The official advice is to filter out the deprecation
571 # warning for people who actually need the feature.
574 # warning for people who actually need the feature.
572 warnings.filterwarnings("ignore",category=DeprecationWarning)
575 warnings.filterwarnings("ignore",category=DeprecationWarning)
573 with contextlib.nested(*ctxs):
576 with contextlib.nested(*ctxs):
574 yield
577 yield
575
578
576 def consumev2(repo, fp, filecount, filesize):
579 def consumev2(repo, fp, filecount, filesize):
577 """Apply the contents from a version 2 streaming clone.
580 """Apply the contents from a version 2 streaming clone.
578
581
579 Data is read from an object that only needs to provide a ``read(size)``
582 Data is read from an object that only needs to provide a ``read(size)``
580 method.
583 method.
581 """
584 """
582 with repo.lock():
585 with repo.lock():
583 repo.ui.status(_('%d files to transfer, %s of data\n') %
586 repo.ui.status(_('%d files to transfer, %s of data\n') %
584 (filecount, util.bytecount(filesize)))
587 (filecount, util.bytecount(filesize)))
585
588
586 start = util.timer()
589 start = util.timer()
587 progress = repo.ui.makeprogress(_('clone'), total=filesize,
590 progress = repo.ui.makeprogress(_('clone'), total=filesize,
588 unit=_('bytes'))
591 unit=_('bytes'))
589 progress.update(0)
592 progress.update(0)
590
593
591 vfsmap = _makemap(repo)
594 vfsmap = _makemap(repo)
592
595
593 with repo.transaction('clone'):
596 with repo.transaction('clone'):
594 ctxs = (vfs.backgroundclosing(repo.ui)
597 ctxs = (vfs.backgroundclosing(repo.ui)
595 for vfs in vfsmap.values())
598 for vfs in vfsmap.values())
596 with nested(*ctxs):
599 with nested(*ctxs):
597 for i in range(filecount):
600 for i in range(filecount):
598 src = util.readexactly(fp, 1)
601 src = util.readexactly(fp, 1)
599 vfs = vfsmap[src]
602 vfs = vfsmap[src]
600 namelen = util.uvarintdecodestream(fp)
603 namelen = util.uvarintdecodestream(fp)
601 datalen = util.uvarintdecodestream(fp)
604 datalen = util.uvarintdecodestream(fp)
602
605
603 name = util.readexactly(fp, namelen)
606 name = util.readexactly(fp, namelen)
604
607
605 if repo.ui.debugflag:
608 if repo.ui.debugflag:
606 repo.ui.debug('adding [%s] %s (%s)\n' %
609 repo.ui.debug('adding [%s] %s (%s)\n' %
607 (src, name, util.bytecount(datalen)))
610 (src, name, util.bytecount(datalen)))
608
611
609 with vfs(name, 'w') as ofp:
612 with vfs(name, 'w') as ofp:
610 for chunk in util.filechunkiter(fp, limit=datalen):
613 for chunk in util.filechunkiter(fp, limit=datalen):
611 progress.increment(step=len(chunk))
614 progress.increment(step=len(chunk))
612 ofp.write(chunk)
615 ofp.write(chunk)
613
616
614 # force @filecache properties to be reloaded from
617 # force @filecache properties to be reloaded from
615 # streamclone-ed file at next access
618 # streamclone-ed file at next access
616 repo.invalidate(clearfilecache=True)
619 repo.invalidate(clearfilecache=True)
617
620
618 elapsed = util.timer() - start
621 elapsed = util.timer() - start
619 if elapsed <= 0:
622 if elapsed <= 0:
620 elapsed = 0.001
623 elapsed = 0.001
621 repo.ui.status(_('transferred %s in %.1f seconds (%s/sec)\n') %
624 repo.ui.status(_('transferred %s in %.1f seconds (%s/sec)\n') %
622 (util.bytecount(progress.pos), elapsed,
625 (util.bytecount(progress.pos), elapsed,
623 util.bytecount(progress.pos / elapsed)))
626 util.bytecount(progress.pos / elapsed)))
624 progress.complete()
627 progress.complete()
625
628
626 def applybundlev2(repo, fp, filecount, filesize, requirements):
629 def applybundlev2(repo, fp, filecount, filesize, requirements):
630 from . import localrepo
631
627 missingreqs = [r for r in requirements if r not in repo.supported]
632 missingreqs = [r for r in requirements if r not in repo.supported]
628 if missingreqs:
633 if missingreqs:
629 raise error.Abort(_('unable to apply stream clone: '
634 raise error.Abort(_('unable to apply stream clone: '
630 'unsupported format: %s') %
635 'unsupported format: %s') %
631 ', '.join(sorted(missingreqs)))
636 ', '.join(sorted(missingreqs)))
632
637
633 consumev2(repo, fp, filecount, filesize)
638 consumev2(repo, fp, filecount, filesize)
634
639
635 # new requirements = old non-format requirements +
640 # new requirements = old non-format requirements +
636 # new format-related remote requirements
641 # new format-related remote requirements
637 # requirements from the streamed-in repository
642 # requirements from the streamed-in repository
638 repo.requirements = set(requirements) | (
643 repo.requirements = set(requirements) | (
639 repo.requirements - repo.supportedformats)
644 repo.requirements - repo.supportedformats)
640 repo._applyopenerreqs()
645 repo.svfs.options = localrepo.resolvestorevfsoptions(
646 repo.ui, repo.requirements)
641 repo._writerequirements()
647 repo._writerequirements()
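# Illustration (not part of the original diff): the requirements update above
# adopts the format requirements of the streamed-in repository while keeping
# the local repository's non-format requirements. With hypothetical sets:
old = {'dotencode', 'fncache', 'shared', 'store', 'revlogv1'}
formats = {'revlogv1', 'generaldelta'}      # stand-in for repo.supportedformats
streamed = {'revlogv1', 'generaldelta'}     # requirements sent with the stream
new = streamed | (old - formats)
assert new == {'dotencode', 'fncache', 'shared', 'store',
               'revlogv1', 'generaldelta'}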