phases: move the binary decoding function in the phases module...
Boris Feld
r34321:12c42bcd default
@@ -1,1934 +1,1921 @@
# bundle2.py - generic container format to transmit arbitrary data.
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""Handling of the new bundle2 format

The goal of bundle2 is to act as an atomic packet to transmit a set of
payloads in an application-agnostic way. It consists of a sequence of "parts"
that will be handed to and processed by the application layer.


General format architecture
===========================

The format is architected as follows:

- magic string
- stream level parameters
- payload parts (any number)
- end of stream marker.

The binary format
============================

All numbers are unsigned and big-endian.

stream level parameters
------------------------

The binary format is as follows:

:params size: int32

  The total number of bytes used by the parameters.

:params value: arbitrary number of bytes

  A blob of `params size` containing the serialized version of all stream
  level parameters.

  The blob contains a space separated list of parameters. Parameters with a
  value are stored in the form `<name>=<value>`. Both name and value are
  urlquoted.

  Empty names are obviously forbidden.

  Names MUST start with a letter. If this first letter is lower case, the
  parameter is advisory and can be safely ignored. However, when the first
  letter is capital, the parameter is mandatory and the bundling process MUST
  stop if it is not able to process it.

  Stream parameters use a simple textual format for two main reasons:

  - Stream level parameters should remain simple and we want to discourage
    any crazy usage.
  - Textual data allows easy human inspection of a bundle2 header in case of
    trouble.

  Any application level options MUST go into a bundle2 part instead.
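
  For example, a bzip2 compressed bundle advertises a single mandatory
  compression parameter, so its parameter blob is simply::

    Compression=BZ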

Payload part
------------------------

The binary format is as follows:

:header size: int32

  The total number of bytes used by the part header. When the header is empty
  (size = 0) this is interpreted as the end of stream marker.

:header:

  The header defines how to interpret the part. It contains two pieces of
  data: the part type, and the part parameters.

  The part type is used to route to an application level handler that can
  interpret the payload.

  Part parameters are passed to the application level handler. They are
  meant to convey information that will help the application level object to
  interpret the part payload.

  The binary format of the header is as follows:

  :typesize: (one byte)

  :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)

  :partid: A 32bits integer (unique in the bundle) that can be used to refer
    to this part.

  :parameters:

    Part's parameters may have arbitrary content, the binary structure is::

      <mandatory-count><advisory-count><param-sizes><param-data>

    :mandatory-count: 1 byte, number of mandatory parameters

    :advisory-count: 1 byte, number of advisory parameters

    :param-sizes:

      N couples of bytes, where N is the total number of parameters. Each
      couple contains (<size-of-key>, <size-of-value>) for one parameter.

    :param-data:

      A blob of bytes from which each parameter key and value can be
      retrieved using the list of size couples stored in the previous
      field.

    Mandatory parameters come first, then the advisory ones.

    Each parameter's key MUST be unique within the part.
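
    For example, a part with one mandatory parameter named ``force`` whose
    value is ``1`` (a hypothetical parameter, purely for illustration) would
    encode its parameter block as the bytes::

      0x01 0x00 0x05 0x01 'force' '1'

    that is: one mandatory parameter, no advisory ones, one (key size,
    value size) couple of (5, 1), then the concatenated key and value data.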

:payload:

  payload is a series of `<chunksize><chunkdata>`.

  `chunksize` is an int32, `chunkdata` are plain bytes (as many as
  `chunksize` says). The payload part is concluded by a zero size chunk.

  The current implementation always produces either zero or one chunk.
  This is an implementation limitation that will ultimately be lifted.

  `chunksize` can be negative to trigger special case processing. No such
  processing is in place yet.
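
  For example, a three byte payload sent as a single chunk is framed as the
  bytes::

    0x00 0x00 0x00 0x03 <3 bytes of data> 0x00 0x00 0x00 0x00

  i.e. an int32 chunk size of 3, the chunk data itself, then the zero size
  chunk that concludes the payload.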

Bundle processing
============================

Each part is processed in order using a "part handler". Handlers are
registered for a certain part type.

The matching of a part to its handler is case insensitive. The case of the
part type is used to know if a part is mandatory or advisory. If the part
type contains any uppercase char it is considered mandatory. When no handler
is known for a mandatory part, the process is aborted and an exception is
raised. If the part is advisory and no handler is known, the part is ignored.
When the process is aborted, the full bundle is still read from the stream to
keep the channel usable. But none of the parts read after an abort are
processed. In the future, dropping the stream may become an option for
channels we do not care to preserve.
"""

from __future__ import absolute_import, division

import errno
import re
import string
import struct
import sys

from .i18n import _
from . import (
    changegroup,
    error,
    obsolete,
    phases,
    pushkey,
    pycompat,
    tags,
    url,
    util,
)

urlerr = util.urlerr
urlreq = util.urlreq

_pack = struct.pack
_unpack = struct.unpack

_fstreamparamsize = '>i'
_fpartheadersize = '>i'
_fparttypesize = '>B'
_fpartid = '>I'
_fpayloadsize = '>i'
_fpartparamcount = '>BB'

preferedchunksize = 4096

_parttypeforbidden = re.compile('[^a-zA-Z0-9_:-]')

def outdebug(ui, message):
    """debug regarding output stream (bundling)"""
    if ui.configbool('devel', 'bundle2.debug'):
        ui.debug('bundle2-output: %s\n' % message)

def indebug(ui, message):
    """debug on input stream (unbundling)"""
    if ui.configbool('devel', 'bundle2.debug'):
        ui.debug('bundle2-input: %s\n' % message)

def validateparttype(parttype):
    """raise ValueError if a parttype contains an invalid character"""
    if _parttypeforbidden.search(parttype):
        raise ValueError(parttype)

def _makefpartparamsizes(nbparams):
    """return a struct format to read part parameter sizes

    The number of parameters is variable so we need to build that format
    dynamically.
    """
    return '>' + ('BB' * nbparams)
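
# Illustrative only: for a part with two parameters, the dynamically built
# format unpacks two (key size, value size) byte couples:
#
#   >>> _makefpartparamsizes(2)
#   '>BBBB'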

parthandlermapping = {}

def parthandler(parttype, params=()):
    """decorator that registers a function as a bundle2 part handler

    eg::

        @parthandler('myparttype', ('mandatory', 'param', 'handled'))
        def myparttypehandler(...):
            '''process a part of type "my part".'''
            ...
    """
    validateparttype(parttype)
    def _decorator(func):
        lparttype = parttype.lower() # enforce lower case matching.
        assert lparttype not in parthandlermapping
        parthandlermapping[lparttype] = func
        func.params = frozenset(params)
        return func
    return _decorator

class unbundlerecords(object):
    """keep record of what happens during an unbundle

    New records are added using `records.add('cat', obj)`. Where 'cat' is a
    category of record and obj is an arbitrary object.

    `records['cat']` will return all entries of this category 'cat'.

    Iterating on the object itself will yield `('category', obj)` tuples
    for all entries.

    All iterations happen in chronological order.
    """

    def __init__(self):
        self._categories = {}
        self._sequences = []
        self._replies = {}

    def add(self, category, entry, inreplyto=None):
        """add a new record of a given category.

        The entry can then be retrieved in the list returned by
        self['category']."""
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))
        if inreplyto is not None:
            self.getreplies(inreplyto).add(category, entry)

    def getreplies(self, partid):
        """get the records that are replies to a specific part"""
        return self._replies.setdefault(partid, unbundlerecords())

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

    def __len__(self):
        return len(self._sequences)

    def __nonzero__(self):
        return bool(self._sequences)

    __bool__ = __nonzero__
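
# Illustrative only: records accumulate per-category entries during an
# unbundle and can be queried afterwards, in insertion order:
#
#   >>> records = unbundlerecords()
#   >>> records.add('changegroup', {'return': 1})
#   >>> records['changegroup']
#   ({'return': 1},)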

class bundleoperation(object):
    """an object that represents a single bundling process

    Its purpose is to carry unbundle-related objects and states.

    A new object should be created at the beginning of each bundle
    processing. The object is to be returned by the processing function.

    The object has very little content now; it will ultimately contain:
    * an access to the repo the bundle is applied to,
    * a ui object,
    * a way to retrieve a transaction to add changes to the repo,
    * a way to record the result of processing each part,
    * a way to construct a bundle response when applicable.
    """

    def __init__(self, repo, transactiongetter, captureoutput=True):
        self.repo = repo
        self.ui = repo.ui
        self.records = unbundlerecords()
        self.reply = None
        self.captureoutput = captureoutput
        self.hookargs = {}
        self._gettransaction = transactiongetter

    def gettransaction(self):
        transaction = self._gettransaction()

        if self.hookargs:
            # the ones added to the transaction supersede those added
            # to the operation.
            self.hookargs.update(transaction.hookargs)
            transaction.hookargs = self.hookargs

            # mark the hookargs as flushed. further attempts to add to
            # hookargs will result in an abort.
            self.hookargs = None

        return transaction

    def addhookargs(self, hookargs):
        if self.hookargs is None:
            raise error.ProgrammingError('attempted to add hookargs to '
                                         'operation after transaction '
                                         'started')
        self.hookargs.update(hookargs)

class TransactionUnavailable(RuntimeError):
    pass

def _notransaction():
    """default method to get a transaction while processing a bundle

    Raise an exception to highlight the fact that no transaction was expected
    to be created"""
    raise TransactionUnavailable()

def applybundle(repo, unbundler, tr, source=None, url=None, **kwargs):
    # transform me into unbundler.apply() as soon as the freeze is lifted
    if isinstance(unbundler, unbundle20):
        tr.hookargs['bundle2'] = '1'
        if source is not None and 'source' not in tr.hookargs:
            tr.hookargs['source'] = source
        if url is not None and 'url' not in tr.hookargs:
            tr.hookargs['url'] = url
        return processbundle(repo, unbundler, lambda: tr)
    else:
        # the transactiongetter won't be used, but we might as well set it
        op = bundleoperation(repo, lambda: tr)
        _processchangegroup(op, unbundler, tr, source, url, **kwargs)
        return op

class partiterator(object):
    def __init__(self, repo, op, unbundler):
        self.repo = repo
        self.op = op
        self.unbundler = unbundler
        self.iterator = None
        self.count = 0
        self.current = None

    def __enter__(self):
        def func():
            itr = enumerate(self.unbundler.iterparts())
            for count, p in itr:
                self.count = count
                self.current = p
                yield p
                p.seek(0, 2)
                self.current = None
        self.iterator = func()
        return self.iterator

    def __exit__(self, type, exc, tb):
        if not self.iterator:
            return

        if exc:
            # If exiting or interrupted, do not attempt to seek the stream in
            # the finally block below. This makes abort faster.
            if (self.current and
                not isinstance(exc, (SystemExit, KeyboardInterrupt))):
                # consume the part content to not corrupt the stream.
                self.current.seek(0, 2)

            # Any exceptions seeking to the end of the bundle at this point
            # are almost certainly related to the underlying stream being
            # bad. And, chances are that the exception we're handling is
            # related to getting in that bad state. So, we swallow the
            # seeking error and re-raise the original error.
            seekerror = False
            try:
                for part in self.iterator:
                    # consume the bundle content
                    part.seek(0, 2)
            except Exception:
                seekerror = True

            # Small hack to let caller code distinguish exceptions from
            # bundle2 processing from processing the old format. This is
            # mostly needed to handle different return codes to unbundle
            # according to the type of bundle. We should probably clean up or
            # drop this return code craziness in a future version.
            exc.duringunbundle2 = True
            salvaged = []
            replycaps = None
            if self.op.reply is not None:
                salvaged = self.op.reply.salvageoutput()
                replycaps = self.op.reply.capabilities
            exc._replycaps = replycaps
            exc._bundle2salvagedoutput = salvaged

            # Re-raising from a variable loses the original stack. So only
            # use that form if we need to.
            if seekerror:
                raise exc

        self.repo.ui.debug('bundle2-input-bundle: %i parts total\n' %
                           self.count)

def processbundle(repo, unbundler, transactiongetter=None, op=None):
    """This function processes a bundle, applying effects to/from a repo

    It iterates over each part then searches for and uses the proper handling
    code to process the part. Parts are processed in order.

    An unknown mandatory part will abort the process.

    It is temporarily possible to provide a prebuilt bundleoperation to the
    function. This is used to ensure output is properly propagated in case of
    an error during the unbundling. This output capturing part will likely be
    reworked and this ability will probably go away in the process.
    """
    if op is None:
        if transactiongetter is None:
            transactiongetter = _notransaction
        op = bundleoperation(repo, transactiongetter)
    # todo:
    # - replace this with an init function soon.
    # - exception catching
    unbundler.params
    if repo.ui.debugflag:
        msg = ['bundle2-input-bundle:']
        if unbundler.params:
            msg.append(' %i params' % len(unbundler.params))
        if op._gettransaction is None or op._gettransaction is _notransaction:
            msg.append(' no-transaction')
        else:
            msg.append(' with-transaction')
        msg.append('\n')
        repo.ui.debug(''.join(msg))

    processparts(repo, op, unbundler)

    return op

def processparts(repo, op, unbundler):
    with partiterator(repo, op, unbundler) as parts:
        for part in parts:
            _processpart(op, part)

def _processchangegroup(op, cg, tr, source, url, **kwargs):
    ret = cg.apply(op.repo, tr, source, url, **kwargs)
    op.records.add('changegroup', {
        'return': ret,
    })
    return ret

def _gethandler(op, part):
    status = 'unknown' # used by debug output
    try:
        handler = parthandlermapping.get(part.type)
        if handler is None:
            status = 'unsupported-type'
            raise error.BundleUnknownFeatureError(parttype=part.type)
        indebug(op.ui, 'found a handler for part %s' % part.type)
        unknownparams = part.mandatorykeys - handler.params
        if unknownparams:
            unknownparams = list(unknownparams)
            unknownparams.sort()
            status = 'unsupported-params (%s)' % ', '.join(unknownparams)
            raise error.BundleUnknownFeatureError(parttype=part.type,
                                                  params=unknownparams)
        status = 'supported'
    except error.BundleUnknownFeatureError as exc:
        if part.mandatory: # mandatory parts
            raise
        indebug(op.ui, 'ignoring unsupported advisory part %s' % exc)
        return # skip to part processing
    finally:
        if op.ui.debugflag:
            msg = ['bundle2-input-part: "%s"' % part.type]
            if not part.mandatory:
                msg.append(' (advisory)')
            nbmp = len(part.mandatorykeys)
            nbap = len(part.params) - nbmp
            if nbmp or nbap:
                msg.append(' (params:')
                if nbmp:
                    msg.append(' %i mandatory' % nbmp)
                if nbap:
                    msg.append(' %i advisory' % nbap)
                msg.append(')')
            msg.append(' %s\n' % status)
            op.ui.debug(''.join(msg))

    return handler

def _processpart(op, part):
    """process a single part from a bundle

    The part is guaranteed to have been fully consumed when the function
    exits (even if an exception is raised)."""
    handler = _gethandler(op, part)
    if handler is None:
        return

    # handler is called outside the above try block so that we don't
    # risk catching KeyErrors from anything other than the
    # parthandlermapping lookup (any KeyError raised by handler()
    # itself represents a defect of a different variety).
    output = None
    if op.captureoutput and op.reply is not None:
        op.ui.pushbuffer(error=True, subproc=True)
        output = ''
    try:
        handler(op, part)
    finally:
        if output is not None:
            output = op.ui.popbuffer()
        if output:
            outpart = op.reply.newpart('output', data=output,
                                       mandatory=False)
            outpart.addparam(
                'in-reply-to', pycompat.bytestr(part.id), mandatory=False)

def decodecaps(blob):
    """decode a bundle2 caps bytes blob into a dictionary

    The blob is a list of capabilities (one per line).
    Capabilities may have values using a line of the form::

        capability=value1,value2,value3

    The values are always a list."""
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if '=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split('=', 1)
            vals = vals.split(',')
        key = urlreq.unquote(key)
        vals = [urlreq.unquote(v) for v in vals]
        caps[key] = vals
    return caps

def encodecaps(caps):
    """encode a bundle2 caps dictionary into a bytes blob"""
    chunks = []
    for ca in sorted(caps):
        vals = caps[ca]
        ca = urlreq.quote(ca)
        vals = [urlreq.quote(v) for v in vals]
        if vals:
            ca = "%s=%s" % (ca, ','.join(vals))
        chunks.append(ca)
    return '\n'.join(chunks)
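
# Illustrative only: a round trip through the caps helpers ('changegroup'
# versions '01' and '02' are real capability values):
#
#   >>> blob = encodecaps({'HG20': (), 'changegroup': ['01', '02']})
#   >>> blob
#   'HG20\nchangegroup=01,02'
#   >>> sorted(decodecaps(blob).items())
#   [('HG20', []), ('changegroup', ['01', '02'])]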

bundletypes = {
    "": ("", 'UN'),       # only when using unbundle on ssh and old http
                          # servers since the unification ssh accepts a
                          # header but there is no capability signaling it.
    "HG20": (), # special-cased below
    "HG10UN": ("HG10UN", 'UN'),
    "HG10BZ": ("HG10", 'BZ'),
    "HG10GZ": ("HG10GZ", 'GZ'),
}

# hgweb uses this list to communicate its preferred type
bundlepriority = ['HG10GZ', 'HG10BZ', 'HG10UN']

class bundle20(object):
    """represent an outgoing bundle2 container

    Use the `addparam` method to add stream level parameters and `newpart`
    to populate it with parts. Then call `getchunks` to retrieve all the
    binary chunks of data that compose the bundle2 container."""

    _magicstring = 'HG20'

    def __init__(self, ui, capabilities=()):
        self.ui = ui
        self._params = []
        self._parts = []
        self.capabilities = dict(capabilities)
        self._compengine = util.compengines.forbundletype('UN')
        self._compopts = None

    def setcompression(self, alg, compopts=None):
        """setup core part compression to <alg>"""
        if alg in (None, 'UN'):
            return
        assert not any(n.lower() == 'compression' for n, v in self._params)
        self.addparam('Compression', alg)
        self._compengine = util.compengines.forbundletype(alg)
        self._compopts = compopts

    @property
    def nbparts(self):
        """total number of parts added to the bundler"""
        return len(self._parts)

    # methods used to define the bundle2 content
    def addparam(self, name, value=None):
        """add a stream level parameter"""
        if not name:
            raise ValueError(r'empty parameter name')
        if name[0:1] not in pycompat.bytestr(string.ascii_letters):
            raise ValueError(r'non letter first character: %s' % name)
        self._params.append((name, value))

    def addpart(self, part):
        """add a new part to the bundle2 container

        Parts contain the actual application payload."""
        assert part.id is None
        part.id = len(self._parts) # very cheap counter
        self._parts.append(part)

    def newpart(self, typeid, *args, **kwargs):
        """create a new part and add it to the container

        The part is directly added to the container. For now, this means
        that any failure to properly initialize the part after calling
        ``newpart`` should result in a failure of the whole bundling process.

        You can still fall back to manually creating and adding the part if
        you need better control."""
        part = bundlepart(typeid, *args, **kwargs)
        self.addpart(part)
        return part

    # methods used to generate the bundle2 stream
    def getchunks(self):
        if self.ui.debugflag:
            msg = ['bundle2-output-bundle: "%s",' % self._magicstring]
            if self._params:
                msg.append(' (%i params)' % len(self._params))
            msg.append(' %i parts total\n' % len(self._parts))
            self.ui.debug(''.join(msg))
        outdebug(self.ui, 'start emission of %s stream' % self._magicstring)
        yield self._magicstring
        param = self._paramchunk()
        outdebug(self.ui, 'bundle parameter: %s' % param)
        yield _pack(_fstreamparamsize, len(param))
        if param:
            yield param
        for chunk in self._compengine.compressstream(self._getcorechunk(),
                                                     self._compopts):
            yield chunk

    def _paramchunk(self):
        """return an encoded version of all stream parameters"""
        blocks = []
        for par, value in self._params:
            par = urlreq.quote(par)
            if value is not None:
                value = urlreq.quote(value)
                par = '%s=%s' % (par, value)
            blocks.append(par)
        return ' '.join(blocks)

    def _getcorechunk(self):
        """yield chunks for the core part of the bundle

        (all but headers and parameters)"""
        outdebug(self.ui, 'start of parts')
        for part in self._parts:
            outdebug(self.ui, 'bundle part: "%s"' % part.type)
            for chunk in part.getchunks(ui=self.ui):
                yield chunk
        outdebug(self.ui, 'end of bundle')
        yield _pack(_fpartheadersize, 0)


    def salvageoutput(self):
        """return a list with a copy of all output parts in the bundle

        This is meant to be used during error handling to make sure we
        preserve server output"""
        salvaged = []
        for part in self._parts:
            if part.type.startswith('output'):
                salvaged.append(part.copy())
        return salvaged
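
# Illustrative only (a sketch, not part of the module): writing a bundle to
# a file object `fp`, given a Mercurial `ui` object:
#
#   bundler = bundle20(ui)
#   bundler.setcompression('BZ')
#   bundler.newpart('output', data='hello', mandatory=False)
#   for chunk in bundler.getchunks():
#       fp.write(chunk)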


class unpackermixin(object):
    """A mixin to extract bytes and struct data from a stream"""

    def __init__(self, fp):
        self._fp = fp

    def _unpack(self, format):
        """unpack this struct format from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        data = self._readexact(struct.calcsize(format))
        return _unpack(format, data)

    def _readexact(self, size):
        """read exactly <size> bytes from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        return changegroup.readexactly(self._fp, size)

def getunbundler(ui, fp, magicstring=None):
    """return a valid unbundler object for a given magicstring"""
    if magicstring is None:
        magicstring = changegroup.readexactly(fp, 4)
    magic, version = magicstring[0:2], magicstring[2:4]
    if magic != 'HG':
        ui.debug(
            "error: invalid magic: %r (version %r), should be 'HG'\n"
            % (magic, version))
        raise error.Abort(_('not a Mercurial bundle'))
    unbundlerclass = formatmap.get(version)
    if unbundlerclass is None:
        raise error.Abort(_('unknown bundle version %s') % version)
    unbundler = unbundlerclass(ui, fp)
    indebug(ui, 'start processing of %s stream' % magicstring)
    return unbundler
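
# Illustrative only (a sketch): reading a bundle back from a file object
# `fp` and consuming each part:
#
#   unbundler = getunbundler(ui, fp)
#   for part in unbundler.iterparts():
#       ui.write('%s\n' % part.type)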

class unbundle20(unpackermixin):
    """interpret a bundle2 stream

    This class is fed with a binary stream and yields parts through its
    `iterparts` method."""

    _magicstring = 'HG20'

    def __init__(self, ui, fp):
        """If header is specified, we do not read it out of the stream."""
        self.ui = ui
        self._compengine = util.compengines.forbundletype('UN')
        self._compressed = None
        super(unbundle20, self).__init__(fp)

    @util.propertycache
    def params(self):
        """dictionary of stream level parameters"""
        indebug(self.ui, 'reading bundle2 stream parameters')
        params = {}
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize < 0:
            raise error.BundleValueError('negative bundle param size: %i'
                                         % paramssize)
        if paramssize:
            params = self._readexact(paramssize)
            params = self._processallparams(params)
        return params

    def _processallparams(self, paramsblock):
        """process and apply all parameters in a stream parameter blob"""
        params = util.sortdict()
        for p in paramsblock.split(' '):
            p = p.split('=', 1)
            p = [urlreq.unquote(i) for i in p]
            if len(p) < 2:
                p.append(None)
            self._processparam(*p)
            params[p[0]] = p[1]
        return params
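
    # Illustrative only: the blob 'Compression=BZ' parses to a sortdict
    # {'Compression': 'BZ'} and, as a side effect, installs the bzip2
    # decompression engine via the registered 'compression' handler.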
780
780
781
781
782 def _processparam(self, name, value):
782 def _processparam(self, name, value):
783 """process a parameter, applying its effect if needed
783 """process a parameter, applying its effect if needed
784
784
785 Parameter starting with a lower case letter are advisory and will be
785 Parameter starting with a lower case letter are advisory and will be
786 ignored when unknown. Those starting with an upper case letter are
786 ignored when unknown. Those starting with an upper case letter are
787 mandatory and will this function will raise a KeyError when unknown.
787 mandatory and will this function will raise a KeyError when unknown.
788
788
789 Note: no option are currently supported. Any input will be either
789 Note: no option are currently supported. Any input will be either
790 ignored or failing.
790 ignored or failing.
791 """
791 """
792 if not name:
792 if not name:
793 raise ValueError(r'empty parameter name')
793 raise ValueError(r'empty parameter name')
794 if name[0:1] not in pycompat.bytestr(string.ascii_letters):
794 if name[0:1] not in pycompat.bytestr(string.ascii_letters):
795 raise ValueError(r'non letter first character: %s' % name)
795 raise ValueError(r'non letter first character: %s' % name)
796 try:
796 try:
797 handler = b2streamparamsmap[name.lower()]
797 handler = b2streamparamsmap[name.lower()]
798 except KeyError:
798 except KeyError:
799 if name[0:1].islower():
799 if name[0:1].islower():
800 indebug(self.ui, "ignoring unknown parameter %s" % name)
800 indebug(self.ui, "ignoring unknown parameter %s" % name)
801 else:
801 else:
802 raise error.BundleUnknownFeatureError(params=(name,))
802 raise error.BundleUnknownFeatureError(params=(name,))
803 else:
803 else:
804 handler(self, name, value)
804 handler(self, name, value)
805
805
806 def _forwardchunks(self):
806 def _forwardchunks(self):
807 """utility to transfer a bundle2 as binary
807 """utility to transfer a bundle2 as binary
808
808
809 This is made necessary by the fact the 'getbundle' command over 'ssh'
809 This is made necessary by the fact the 'getbundle' command over 'ssh'
810 have no way to know then the reply end, relying on the bundle to be
810 have no way to know then the reply end, relying on the bundle to be
811 interpreted to know its end. This is terrible and we are sorry, but we
811 interpreted to know its end. This is terrible and we are sorry, but we
812 needed to move forward to get general delta enabled.
812 needed to move forward to get general delta enabled.
813 """
813 """
814 yield self._magicstring
814 yield self._magicstring
815 assert 'params' not in vars(self)
815 assert 'params' not in vars(self)
816 paramssize = self._unpack(_fstreamparamsize)[0]
816 paramssize = self._unpack(_fstreamparamsize)[0]
817 if paramssize < 0:
817 if paramssize < 0:
818 raise error.BundleValueError('negative bundle param size: %i'
818 raise error.BundleValueError('negative bundle param size: %i'
819 % paramssize)
819 % paramssize)
820 yield _pack(_fstreamparamsize, paramssize)
820 yield _pack(_fstreamparamsize, paramssize)
821 if paramssize:
821 if paramssize:
822 params = self._readexact(paramssize)
822 params = self._readexact(paramssize)
823 self._processallparams(params)
823 self._processallparams(params)
824 yield params
824 yield params
825 assert self._compengine.bundletype == 'UN'
825 assert self._compengine.bundletype == 'UN'
826 # From there, payload might need to be decompressed
826 # From there, payload might need to be decompressed
827 self._fp = self._compengine.decompressorreader(self._fp)
827 self._fp = self._compengine.decompressorreader(self._fp)
828 emptycount = 0
828 emptycount = 0
829 while emptycount < 2:
829 while emptycount < 2:
830 # so we can brainlessly loop
830 # so we can brainlessly loop
831 assert _fpartheadersize == _fpayloadsize
831 assert _fpartheadersize == _fpayloadsize
832 size = self._unpack(_fpartheadersize)[0]
832 size = self._unpack(_fpartheadersize)[0]
833 yield _pack(_fpartheadersize, size)
833 yield _pack(_fpartheadersize, size)
834 if size:
834 if size:
835 emptycount = 0
835 emptycount = 0
836 else:
836 else:
837 emptycount += 1
837 emptycount += 1
838 continue
838 continue
839 if size == flaginterrupt:
839 if size == flaginterrupt:
840 continue
840 continue
841 elif size < 0:
841 elif size < 0:
842 raise error.BundleValueError('negative chunk size: %i')
842 raise error.BundleValueError('negative chunk size: %i')
843 yield self._readexact(size)
843 yield self._readexact(size)
844
844
845
845
846 def iterparts(self):
846 def iterparts(self):
847 """yield all parts contained in the stream"""
847 """yield all parts contained in the stream"""
848 # make sure param have been loaded
848 # make sure param have been loaded
849 self.params
849 self.params
850 # From there, payload need to be decompressed
850 # From there, payload need to be decompressed
851 self._fp = self._compengine.decompressorreader(self._fp)
851 self._fp = self._compengine.decompressorreader(self._fp)
852 indebug(self.ui, 'start extraction of bundle2 parts')
852 indebug(self.ui, 'start extraction of bundle2 parts')
853 headerblock = self._readpartheader()
853 headerblock = self._readpartheader()
854 while headerblock is not None:
854 while headerblock is not None:
855 part = unbundlepart(self.ui, headerblock, self._fp)
855 part = unbundlepart(self.ui, headerblock, self._fp)
856 yield part
856 yield part
857 # Seek to the end of the part to force it's consumption so the next
857 # Seek to the end of the part to force it's consumption so the next
858 # part can be read. But then seek back to the beginning so the
858 # part can be read. But then seek back to the beginning so the
859 # code consuming this generator has a part that starts at 0.
859 # code consuming this generator has a part that starts at 0.
860 part.seek(0, 2)
860 part.seek(0, 2)
861 part.seek(0)
861 part.seek(0)
862 headerblock = self._readpartheader()
862 headerblock = self._readpartheader()
863 indebug(self.ui, 'end of bundle2 stream')
863 indebug(self.ui, 'end of bundle2 stream')

    def _readpartheader(self):
        """read the part header size and return the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError('negative part header size: %i'
                                         % headersize)
        indebug(self.ui, 'part header size: %i' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def compressed(self):
        self.params # load params
        return self._compressed

    def close(self):
        """close underlying file"""
        if util.safehasattr(self._fp, 'close'):
            return self._fp.close()

formatmap = {'20': unbundle20}

b2streamparamsmap = {}

def b2streamparamhandler(name):
    """register a handler for a stream level parameter"""
    def decorator(func):
        assert name not in b2streamparamsmap
        b2streamparamsmap[name] = func
        return func
    return decorator

@b2streamparamhandler('compression')
def processcompression(unbundler, param, value):
    """read compression parameter and install payload decompression"""
    if value not in util.compengines.supportedbundletypes:
        raise error.BundleUnknownFeatureError(params=(param,),
                                              values=(value,))
    unbundler._compengine = util.compengines.forbundletype(value)
    if value is not None:
        unbundler._compressed = True

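# Illustrative sketch, not part of the original module: registering a
# handler for a hypothetical advisory stream level parameter. The parameter
# name 'exampleparam' and the handler body are assumptions.
#
#     @b2streamparamhandler('exampleparam')
#     def processexampleparam(unbundler, param, value):
#         # advisory (lowercase) parameters may be safely ignored
#         indebug(unbundler.ui, 'exampleparam: %s' % value)
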
class bundlepart(object):
    """A bundle2 part contains application level payload

    The part `type` is used to route the part to the application level
    handler.

    The part payload is contained in ``part.data``. It could be raw bytes or a
    generator of byte chunks.

    You can add parameters to the part using the ``addparam`` method.
    Parameters can be either mandatory (default) or advisory. The remote side
    should be able to safely ignore the advisory ones.

    Neither data nor parameters can be modified after the generation has
    begun.
    """

    def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
                 data='', mandatory=True):
        validateparttype(parttype)
        self.id = None
        self.type = parttype
        self._data = data
        self._mandatoryparams = list(mandatoryparams)
        self._advisoryparams = list(advisoryparams)
        # checking for duplicated entries
        self._seenparams = set()
        for pname, __ in self._mandatoryparams + self._advisoryparams:
            if pname in self._seenparams:
                raise error.ProgrammingError('duplicated params: %s' % pname)
            self._seenparams.add(pname)
        # status of the part's generation:
        # - None: not started,
        # - False: currently being generated,
        # - True: generation done.
        self._generated = None
        self.mandatory = mandatory

    def __repr__(self):
        cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
        return ('<%s object at %x; id: %s; type: %s; mandatory: %s>'
                % (cls, id(self), self.id, self.type, self.mandatory))

    def copy(self):
        """return a copy of the part

        The new part has the very same content but no partid assigned yet.
        Parts with generated data cannot be copied."""
        assert not util.safehasattr(self.data, 'next')
        return self.__class__(self.type, self._mandatoryparams,
                              self._advisoryparams, self._data, self.mandatory)

    # methods used to define the part content
    @property
    def data(self):
        return self._data

    @data.setter
    def data(self, data):
        if self._generated is not None:
            raise error.ReadOnlyPartError('part is being generated')
        self._data = data

    @property
    def mandatoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._mandatoryparams)

    @property
    def advisoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._advisoryparams)

    def addparam(self, name, value='', mandatory=True):
        """add a parameter to the part

        If 'mandatory' is set to True, the remote handler must claim support
        for this parameter or the unbundling will be aborted.

        The 'name' and 'value' cannot exceed 255 bytes each.
        """
        if self._generated is not None:
            raise error.ReadOnlyPartError('part is being generated')
        if name in self._seenparams:
            raise ValueError('duplicated params: %s' % name)
        self._seenparams.add(name)
        params = self._advisoryparams
        if mandatory:
            params = self._mandatoryparams
        params.append((name, value))

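    # Illustrative sketch, not part of the original module: building a part
    # by hand. The 'output' part type exists in bundle2, but this exact
    # usage and the parameter shown are assumptions.
    #
    #     part = bundlepart('output', data='hello')
    #     part.addparam('verbose', '1', mandatory=False)  # advisory
    #     for chunk in part.getchunks(ui):
    #         ...  # write the framed chunks to the wire
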
    # methods used to generate the bundle2 stream
    def getchunks(self, ui):
        if self._generated is not None:
            raise error.ProgrammingError('part can only be consumed once')
        self._generated = False

        if ui.debugflag:
            msg = ['bundle2-output-part: "%s"' % self.type]
            if not self.mandatory:
                msg.append(' (advisory)')
            nbmp = len(self.mandatoryparams)
            nbap = len(self.advisoryparams)
            if nbmp or nbap:
                msg.append(' (params:')
                if nbmp:
                    msg.append(' %i mandatory' % nbmp)
                if nbap:
                    msg.append(' %i advisory' % nbap)
                msg.append(')')
            if not self.data:
                msg.append(' empty payload')
            elif (util.safehasattr(self.data, 'next')
                  or util.safehasattr(self.data, '__next__')):
                msg.append(' streamed payload')
            else:
                msg.append(' %i bytes payload' % len(self.data))
            msg.append('\n')
            ui.debug(''.join(msg))

        #### header
        if self.mandatory:
            parttype = self.type.upper()
        else:
            parttype = self.type.lower()
        outdebug(ui, 'part %s: "%s"' % (pycompat.bytestr(self.id), parttype))
        ## parttype
        header = [_pack(_fparttypesize, len(parttype)),
                  parttype, _pack(_fpartid, self.id),
                  ]
        ## parameters
        # count
        manpar = self.mandatoryparams
        advpar = self.advisoryparams
        header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
        # size
        parsizes = []
        for key, value in manpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        for key, value in advpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        paramsizes = _pack(_makefpartparamsizes(len(parsizes) // 2), *parsizes)
        header.append(paramsizes)
        # key, value
        for key, value in manpar:
            header.append(key)
            header.append(value)
        for key, value in advpar:
            header.append(key)
            header.append(value)
        ## finalize header
        try:
            headerchunk = ''.join(header)
        except TypeError:
            raise TypeError(r'Found a non-bytes trying to '
                            r'build bundle part header: %r' % header)
        outdebug(ui, 'header chunk size: %i' % len(headerchunk))
        yield _pack(_fpartheadersize, len(headerchunk))
        yield headerchunk
        ## payload
        try:
            for chunk in self._payloadchunks():
                outdebug(ui, 'payload chunk size: %i' % len(chunk))
                yield _pack(_fpayloadsize, len(chunk))
                yield chunk
        except GeneratorExit:
            # GeneratorExit means that nobody is listening for our
            # results anyway, so just bail quickly rather than trying
            # to produce an error part.
            ui.debug('bundle2-generatorexit\n')
            raise
        except BaseException as exc:
            bexc = util.forcebytestr(exc)
            # backup exception data for later
            ui.debug('bundle2-input-stream-interrupt: encoding exception %s'
                     % bexc)
            tb = sys.exc_info()[2]
            msg = 'unexpected error: %s' % bexc
            interpart = bundlepart('error:abort', [('message', msg)],
                                   mandatory=False)
            interpart.id = 0
            yield _pack(_fpayloadsize, -1)
            for chunk in interpart.getchunks(ui=ui):
                yield chunk
            outdebug(ui, 'closing payload chunk')
            # abort current part payload
            yield _pack(_fpayloadsize, 0)
            pycompat.raisewithtb(exc, tb)
        # end of payload
        outdebug(ui, 'closing payload chunk')
        yield _pack(_fpayloadsize, 0)
        self._generated = True

    def _payloadchunks(self):
        """yield chunks of the part payload

        Exists to handle the different methods to provide data to a part."""
        # we only support fixed size data now.
        # This will be improved in the future.
        if (util.safehasattr(self.data, 'next')
            or util.safehasattr(self.data, '__next__')):
            buff = util.chunkbuffer(self.data)
            chunk = buff.read(preferedchunksize)
            while chunk:
                yield chunk
                chunk = buff.read(preferedchunksize)
        elif len(self.data):
            yield self.data


flaginterrupt = -1

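# Illustrative note, not part of the original module: a part payload is a
# sequence of '<int32 size><size bytes>' chunks. A size of 0 closes the
# payload, while flaginterrupt (-1) announces an out of band part that
# interrupthandler below will read before the payload resumes.
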
class interrupthandler(unpackermixin):
    """read one part and process it with restricted capability

    This allows transmitting an exception raised on the producer side during
    part iteration while the consumer is reading a part.

    Parts processed in this manner only have access to a ui object."""

    def __init__(self, ui, fp):
        super(interrupthandler, self).__init__(fp)
        self.ui = ui

    def _readpartheader(self):
        """read the part header size and return the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError('negative part header size: %i'
                                         % headersize)
        indebug(self.ui, 'part header size: %i\n' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def __call__(self):

        self.ui.debug('bundle2-input-stream-interrupt:'
                      ' opening out of band context\n')
        indebug(self.ui, 'bundle2 stream interruption, looking for a part.')
        headerblock = self._readpartheader()
        if headerblock is None:
            indebug(self.ui, 'no part found during interruption.')
            return
        part = unbundlepart(self.ui, headerblock, self._fp)
        op = interruptoperation(self.ui)
        hardabort = False
        try:
            _processpart(op, part)
        except (SystemExit, KeyboardInterrupt):
            hardabort = True
            raise
        finally:
            if not hardabort:
                part.seek(0, 2)
        self.ui.debug('bundle2-input-stream-interrupt:'
                      ' closing out of band context\n')

class interruptoperation(object):
    """A limited operation to be used by part handlers during interruption

    It only has access to a ui object.
    """

    def __init__(self, ui):
        self.ui = ui
        self.reply = None
        self.captureoutput = False

    @property
    def repo(self):
        raise error.ProgrammingError('no repo access from stream interruption')

    def gettransaction(self):
        raise TransactionUnavailable('no repo access from stream interruption')

class unbundlepart(unpackermixin):
    """a bundle part read from a bundle"""

    def __init__(self, ui, header, fp):
        super(unbundlepart, self).__init__(fp)
        self._seekable = (util.safehasattr(fp, 'seek') and
                          util.safehasattr(fp, 'tell'))
        self.ui = ui
        # unbundle state attr
        self._headerdata = header
        self._headeroffset = 0
        self._initialized = False
        self.consumed = False
        # part data
        self.id = None
        self.type = None
        self.mandatoryparams = None
        self.advisoryparams = None
        self.params = None
        self.mandatorykeys = ()
        self._payloadstream = None
        self._readheader()
        self._mandatory = None
        self._chunkindex = [] #(payload, file) position tuples for chunk starts
        self._pos = 0

    def _fromheader(self, size):
        """return the next <size> bytes from the header"""
        offset = self._headeroffset
        data = self._headerdata[offset:(offset + size)]
        self._headeroffset = offset + size
        return data

    def _unpackheader(self, format):
        """read given format from header

        This automatically computes the size of the format to read."""
        data = self._fromheader(struct.calcsize(format))
        return _unpack(format, data)

    def _initparams(self, mandatoryparams, advisoryparams):
        """internal function to set up all logic-related parameters"""
        # make it read only to prevent people touching it by mistake.
        self.mandatoryparams = tuple(mandatoryparams)
        self.advisoryparams = tuple(advisoryparams)
        # user friendly UI
        self.params = util.sortdict(self.mandatoryparams)
        self.params.update(self.advisoryparams)
        self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)

    def _payloadchunks(self, chunknum=0):
        '''seek to specified chunk and start yielding data'''
        if len(self._chunkindex) == 0:
            assert chunknum == 0, 'Must start with chunk 0'
            self._chunkindex.append((0, self._tellfp()))
        else:
            assert chunknum < len(self._chunkindex), \
                   'Unknown chunk %d' % chunknum
            self._seekfp(self._chunkindex[chunknum][1])

        pos = self._chunkindex[chunknum][0]
        payloadsize = self._unpack(_fpayloadsize)[0]
        indebug(self.ui, 'payload chunk size: %i' % payloadsize)
        while payloadsize:
            if payloadsize == flaginterrupt:
                # interruption detection, the handler will now read a
                # single part and process it.
                interrupthandler(self.ui, self._fp)()
            elif payloadsize < 0:
                msg = 'negative payload chunk size: %i' % payloadsize
                raise error.BundleValueError(msg)
            else:
                result = self._readexact(payloadsize)
                chunknum += 1
                pos += payloadsize
                if chunknum == len(self._chunkindex):
                    self._chunkindex.append((pos, self._tellfp()))
                yield result
            payloadsize = self._unpack(_fpayloadsize)[0]
            indebug(self.ui, 'payload chunk size: %i' % payloadsize)

    def _findchunk(self, pos):
        '''for a given payload position, return a chunk number and offset'''
        for chunk, (ppos, fpos) in enumerate(self._chunkindex):
            if ppos == pos:
                return chunk, 0
            elif ppos > pos:
                return chunk - 1, pos - self._chunkindex[chunk - 1][0]
        raise ValueError('Unknown chunk')

    def _readheader(self):
        """read the header and set up the object"""
        typesize = self._unpackheader(_fparttypesize)[0]
        self.type = self._fromheader(typesize)
        indebug(self.ui, 'part type: "%s"' % self.type)
        self.id = self._unpackheader(_fpartid)[0]
        indebug(self.ui, 'part id: "%s"' % pycompat.bytestr(self.id))
        # extract mandatory bit from type
        self.mandatory = (self.type != self.type.lower())
        self.type = self.type.lower()
        ## reading parameters
        # param count
        mancount, advcount = self._unpackheader(_fpartparamcount)
        indebug(self.ui, 'part parameters: %i' % (mancount + advcount))
        # param size
        fparamsizes = _makefpartparamsizes(mancount + advcount)
        paramsizes = self._unpackheader(fparamsizes)
        # make it a list of couples again
        paramsizes = list(zip(paramsizes[::2], paramsizes[1::2]))
        # split mandatory from advisory
        mansizes = paramsizes[:mancount]
        advsizes = paramsizes[mancount:]
        # retrieve param value
        manparams = []
        for key, value in mansizes:
            manparams.append((self._fromheader(key), self._fromheader(value)))
        advparams = []
        for key, value in advsizes:
            advparams.append((self._fromheader(key), self._fromheader(value)))
        self._initparams(manparams, advparams)
        ## part payload
        self._payloadstream = util.chunkbuffer(self._payloadchunks())
        # we read the data, tell it
        self._initialized = True

    def read(self, size=None):
        """read payload data"""
        if not self._initialized:
            self._readheader()
        if size is None:
            data = self._payloadstream.read()
        else:
            data = self._payloadstream.read(size)
        self._pos += len(data)
        if size is None or len(data) < size:
            if not self.consumed and self._pos:
                self.ui.debug('bundle2-input-part: total payload size %i\n'
                              % self._pos)
            self.consumed = True
        return data

    def tell(self):
        return self._pos

    def seek(self, offset, whence=0):
        if whence == 0:
            newpos = offset
        elif whence == 1:
            newpos = self._pos + offset
        elif whence == 2:
            if not self.consumed:
                self.read()
            newpos = self._chunkindex[-1][0] - offset
        else:
            raise ValueError('Unknown whence value: %r' % (whence,))

        if newpos > self._chunkindex[-1][0] and not self.consumed:
            self.read()
        if not 0 <= newpos <= self._chunkindex[-1][0]:
            raise ValueError('Offset out of range')

        if self._pos != newpos:
            chunk, internaloffset = self._findchunk(newpos)
            self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
            adjust = self.read(internaloffset)
            if len(adjust) != internaloffset:
                raise error.Abort(_('Seek failed\n'))
            self._pos = newpos

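    # Illustrative note, not part of the original module: iterparts() uses
    # 'part.seek(0, 2)' to force full consumption of a part and
    # 'part.seek(0)' to rewind it, so downstream consumers always receive a
    # part positioned at 0.
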
    def _seekfp(self, offset, whence=0):
        """move the underlying file pointer

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        if self._seekable:
            return self._fp.seek(offset, whence)
        else:
            raise NotImplementedError(_('File pointer is not seekable'))

    def _tellfp(self):
        """return the file offset, or None if file is not seekable

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        if self._seekable:
            try:
                return self._fp.tell()
            except IOError as e:
                if e.errno == errno.ESPIPE:
                    self._seekable = False
                else:
                    raise
        return None

# These are only the static capabilities.
# Check the 'getrepocaps' function for the rest.
capabilities = {'HG20': (),
                'error': ('abort', 'unsupportedcontent', 'pushraced',
                          'pushkey'),
                'listkeys': (),
                'pushkey': (),
                'digests': tuple(sorted(util.DIGESTS.keys())),
                'remote-changegroup': ('http', 'https'),
                'hgtagsfnodes': (),
               }

def getrepocaps(repo, allowpushback=False):
    """return the bundle2 capabilities for a given repo

    Exists to allow extensions (like evolution) to mutate the capabilities.
    """
    caps = capabilities.copy()
    caps['changegroup'] = tuple(sorted(
        changegroup.supportedincomingversions(repo)))
    if obsolete.isenabled(repo, obsolete.exchangeopt):
        supportedformat = tuple('V%i' % v for v in obsolete.formats)
        caps['obsmarkers'] = supportedformat
    if allowpushback:
        caps['pushback'] = ()
    cpmode = repo.ui.config('server', 'concurrent-push-mode')
    if cpmode == 'check-related':
        caps['checkheads'] = ('related',)
    return caps

def bundle2caps(remote):
    """return the bundle capabilities of a peer as dict"""
    raw = remote.capable('bundle2')
    if not raw and raw != '':
        return {}
    capsblob = urlreq.unquote(remote.capable('bundle2'))
    return decodecaps(capsblob)

def obsmarkersversion(caps):
    """extract the list of supported obsmarkers versions from a bundle2caps dict
    """
    obscaps = caps.get('obsmarkers', ())
    return [int(c[1:]) for c in obscaps if c.startswith('V')]

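# Illustrative note, not part of the original module: with caps such as
# {'obsmarkers': ('V0', 'V1')}, obsmarkersversion returns [0, 1]; entries
# not starting with 'V' are ignored.
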
def writenewbundle(ui, repo, source, filename, bundletype, outgoing, opts,
                   vfs=None, compression=None, compopts=None):
    if bundletype.startswith('HG10'):
        cg = changegroup.makechangegroup(repo, outgoing, '01', source)
        return writebundle(ui, cg, filename, bundletype, vfs=vfs,
                           compression=compression, compopts=compopts)
    elif not bundletype.startswith('HG20'):
        raise error.ProgrammingError('unknown bundle type: %s' % bundletype)

    caps = {}
    if 'obsolescence' in opts:
        caps['obsmarkers'] = ('V1',)
    bundle = bundle20(ui, caps)
    bundle.setcompression(compression, compopts)
    _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
    chunkiter = bundle.getchunks()

    return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)

def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
    # We should eventually reconcile this logic with the one behind
    # 'exchange.getbundle2partsgenerator'.
    #
    # The types of input from 'getbundle' and 'writenewbundle' are a bit
    # different right now. So we keep them separated for now for the sake of
    # simplicity.

    # we always want a changegroup in such a bundle
    cgversion = opts.get('cg.version')
    if cgversion is None:
        cgversion = changegroup.safeversion(repo)
    cg = changegroup.makechangegroup(repo, outgoing, cgversion, source)
    part = bundler.newpart('changegroup', data=cg.getchunks())
    part.addparam('version', cg.version)
    if 'clcount' in cg.extras:
        part.addparam('nbchanges', '%d' % cg.extras['clcount'],
                      mandatory=False)
    if opts.get('phases') and repo.revs('%ln and secret()',
                                        outgoing.missingheads):
        part.addparam('targetphase', '%d' % phases.secret, mandatory=False)

    addparttagsfnodescache(repo, bundler, outgoing)

    if opts.get('obsolescence', False):
        obsmarkers = repo.obsstore.relevantmarkers(outgoing.missing)
        buildobsmarkerspart(bundler, obsmarkers)

    if opts.get('phases', False):
        headsbyphase = phases.subsetphaseheads(repo, outgoing.missing)
        phasedata = phases.binaryencode(headsbyphase)
        bundler.newpart('phase-heads', data=phasedata)

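# Illustrative note, not part of the original module: 'opts' above is a
# plain mapping, so an invocation with e.g. {'cg.version': '02',
# 'phases': True, 'obsolescence': True} would request a v2 changegroup plus
# phase-heads and obsmarkers parts. The exact option values shown are
# assumptions.
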
def addparttagsfnodescache(repo, bundler, outgoing):
    # we include the tags fnode cache for the bundle changeset
    # (as an optional part)
    cache = tags.hgtagsfnodescache(repo.unfiltered())
    chunks = []

    # .hgtags fnodes are only relevant for head changesets. While we could
    # transfer values for all known nodes, there will likely be little to
    # no benefit.
    #
    # We don't bother using a generator to produce output data because
    # a) we only have 40 bytes per head and even esoteric numbers of heads
    # consume little memory (1M heads is 40MB) b) we don't want to send the
    # part if we don't have entries and knowing if we have entries requires
    # cache lookups.
    for node in outgoing.missingheads:
        # Don't compute missing, as this may slow down serving.
        fnode = cache.getfnode(node, computemissing=False)
        if fnode is not None:
            chunks.extend([node, fnode])

    if chunks:
        bundler.newpart('hgtagsfnodes', data=''.join(chunks))

def buildobsmarkerspart(bundler, markers):
    """add an obsmarker part to the bundler with <markers>

    No part is created if markers is empty.
    Raises ValueError if the bundler doesn't support any known obsmarker format.
    """
    if not markers:
        return None

    remoteversions = obsmarkersversion(bundler.capabilities)
    version = obsolete.commonversion(remoteversions)
    if version is None:
        raise ValueError('bundler does not support common obsmarker format')
    stream = obsolete.encodemarkers(markers, True, version=version)
    return bundler.newpart('obsmarkers', data=stream)

def writebundle(ui, cg, filename, bundletype, vfs=None, compression=None,
                compopts=None):
    """Write a bundle file and return its filename.

    Existing files will not be overwritten.
    If no filename is specified, a temporary file is created.
    bz2 compression can be turned off.
    The bundle file will be deleted in case of errors.
    """

    if bundletype == "HG20":
        bundle = bundle20(ui)
        bundle.setcompression(compression, compopts)
        part = bundle.newpart('changegroup', data=cg.getchunks())
        part.addparam('version', cg.version)
        if 'clcount' in cg.extras:
            part.addparam('nbchanges', '%d' % cg.extras['clcount'],
                          mandatory=False)
        chunkiter = bundle.getchunks()
    else:
        # compression argument is only for the bundle2 case
        assert compression is None
        if cg.version != '01':
            raise error.Abort(_('old bundle types only support v1 '
                                'changegroups'))
        header, comp = bundletypes[bundletype]
        if comp not in util.compengines.supportedbundletypes:
            raise error.Abort(_('unknown stream compression type: %s')
                              % comp)
        compengine = util.compengines.forbundletype(comp)
        def chunkiter():
            yield header
            for chunk in compengine.compressstream(cg.getchunks(), compopts):
                yield chunk
        chunkiter = chunkiter()

    # parse the changegroup data, otherwise we will block
    # in case of sshrepo because we don't know the end of the stream
    return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)

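# Illustrative usage sketch, not part of the original module; the filename
# and compression value are assumptions:
#
#     cg = changegroup.makechangegroup(repo, outgoing, '01', 'bundle')
#     writebundle(ui, cg, 'backup.hg', 'HG20', compression='BZ')
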
def combinechangegroupresults(op):
    """logic to combine 0 or more addchangegroup results into one"""
    results = [r.get('return', 0)
               for r in op.records['changegroup']]
    changedheads = 0
    result = 1
    for ret in results:
        # If any changegroup result is 0, return 0
        if ret == 0:
            result = 0
            break
        if ret < -1:
            changedheads += ret + 1
        elif ret > 1:
            changedheads += ret - 1
    if changedheads > 0:
        result = 1 + changedheads
    elif changedheads < 0:
        result = -1 + changedheads
    return result

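# Illustrative note, not part of the original module: a changegroup result
# of 1 + n encodes n added heads and -1 - n encodes n removed heads. So
# combining the results [3, -2] gives changedheads = (+2) + (-1) = +1 and a
# combined result of 2: success with one net head added.
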
@parthandler('changegroup', ('version', 'nbchanges', 'treemanifest',
                             'targetphase'))
def handlechangegroup(op, inpart):
    """apply a changegroup part on the repo

    This is a very early implementation that will need massive rework before
    being inflicted on any end-user.
    """
    tr = op.gettransaction()
    unpackerversion = inpart.params.get('version', '01')
    # We should raise an appropriate exception here
    cg = changegroup.getunbundler(unpackerversion, inpart, None)
    # the source and url passed here are overwritten by the ones contained in
    # the transaction.hookargs argument. So 'bundle2' is a placeholder
    nbchangesets = None
    if 'nbchanges' in inpart.params:
        nbchangesets = int(inpart.params.get('nbchanges'))
    if ('treemanifest' in inpart.params and
        'treemanifest' not in op.repo.requirements):
        if len(op.repo.changelog) != 0:
            raise error.Abort(_(
                "bundle contains tree manifests, but local repo is "
                "non-empty and does not use tree manifests"))
        op.repo.requirements.add('treemanifest')
        op.repo._applyopenerreqs()
        op.repo._writerequirements()
    extrakwargs = {}
    targetphase = inpart.params.get('targetphase')
    if targetphase is not None:
        extrakwargs['targetphase'] = int(targetphase)
    ret = _processchangegroup(op, cg, tr, 'bundle2', 'bundle2',
                              expectedtotal=nbchangesets, **extrakwargs)
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart('reply:changegroup', mandatory=False)
        part.addparam(
            'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
        part.addparam('return', '%i' % ret, mandatory=False)
    assert not inpart.read()

1626 _remotechangegroupparams = tuple(['url', 'size', 'digests'] +
1626 _remotechangegroupparams = tuple(['url', 'size', 'digests'] +
1627 ['digest:%s' % k for k in util.DIGESTS.keys()])
1627 ['digest:%s' % k for k in util.DIGESTS.keys()])
1628 @parthandler('remote-changegroup', _remotechangegroupparams)
1628 @parthandler('remote-changegroup', _remotechangegroupparams)
1629 def handleremotechangegroup(op, inpart):
1629 def handleremotechangegroup(op, inpart):
1630 """apply a bundle10 on the repo, given an url and validation information
1630 """apply a bundle10 on the repo, given an url and validation information
1631
1631
1632 All the information about the remote bundle to import are given as
1632 All the information about the remote bundle to import are given as
1633 parameters. The parameters include:
1633 parameters. The parameters include:
1634 - url: the url to the bundle10.
1634 - url: the url to the bundle10.
1635 - size: the bundle10 file size. It is used to validate what was
1635 - size: the bundle10 file size. It is used to validate what was
1636 retrieved by the client matches the server knowledge about the bundle.
1636 retrieved by the client matches the server knowledge about the bundle.
1637 - digests: a space separated list of the digest types provided as
1637 - digests: a space separated list of the digest types provided as
1638 parameters.
1638 parameters.
1639 - digest:<digest-type>: the hexadecimal representation of the digest with
1639 - digest:<digest-type>: the hexadecimal representation of the digest with
1640 that name. Like the size, it is used to validate what was retrieved by
1640 that name. Like the size, it is used to validate what was retrieved by
1641 the client matches what the server knows about the bundle.
1641 the client matches what the server knows about the bundle.
1642
1642
1643 When multiple digest types are given, all of them are checked.
1643 When multiple digest types are given, all of them are checked.
1644 """
1644 """
1645 try:
1645 try:
1646 raw_url = inpart.params['url']
1646 raw_url = inpart.params['url']
1647 except KeyError:
1647 except KeyError:
1648 raise error.Abort(_('remote-changegroup: missing "%s" param') % 'url')
1648 raise error.Abort(_('remote-changegroup: missing "%s" param') % 'url')
1649 parsed_url = util.url(raw_url)
1649 parsed_url = util.url(raw_url)
1650 if parsed_url.scheme not in capabilities['remote-changegroup']:
1650 if parsed_url.scheme not in capabilities['remote-changegroup']:
1651 raise error.Abort(_('remote-changegroup does not support %s urls') %
1651 raise error.Abort(_('remote-changegroup does not support %s urls') %
1652 parsed_url.scheme)
1652 parsed_url.scheme)
1653
1653
1654 try:
1654 try:
1655 size = int(inpart.params['size'])
1655 size = int(inpart.params['size'])
1656 except ValueError:
1656 except ValueError:
1657 raise error.Abort(_('remote-changegroup: invalid value for param "%s"')
1657 raise error.Abort(_('remote-changegroup: invalid value for param "%s"')
1658 % 'size')
1658 % 'size')
1659 except KeyError:
1659 except KeyError:
1660 raise error.Abort(_('remote-changegroup: missing "%s" param') % 'size')
1660 raise error.Abort(_('remote-changegroup: missing "%s" param') % 'size')
1661
1661
1662 digests = {}
1662 digests = {}
1663 for typ in inpart.params.get('digests', '').split():
1663 for typ in inpart.params.get('digests', '').split():
1664 param = 'digest:%s' % typ
1664 param = 'digest:%s' % typ
1665 try:
1665 try:
1666 value = inpart.params[param]
1666 value = inpart.params[param]
1667 except KeyError:
1667 except KeyError:
1668 raise error.Abort(_('remote-changegroup: missing "%s" param') %
1668 raise error.Abort(_('remote-changegroup: missing "%s" param') %
1669 param)
1669 param)
1670 digests[typ] = value
1670 digests[typ] = value
1671
1671
1672 real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)
1672 real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)
1673
1673
1674 tr = op.gettransaction()
1674 tr = op.gettransaction()
1675 from . import exchange
1675 from . import exchange
1676 cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
1676 cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
1677 if not isinstance(cg, changegroup.cg1unpacker):
1677 if not isinstance(cg, changegroup.cg1unpacker):
1678 raise error.Abort(_('%s: not a bundle version 1.0') %
1678 raise error.Abort(_('%s: not a bundle version 1.0') %
1679 util.hidepassword(raw_url))
1679 util.hidepassword(raw_url))
1680 ret = _processchangegroup(op, cg, tr, 'bundle2', 'bundle2')
1680 ret = _processchangegroup(op, cg, tr, 'bundle2', 'bundle2')
1681 if op.reply is not None:
1681 if op.reply is not None:
1682 # This is definitely not the final form of this
1682 # This is definitely not the final form of this
1683 # return. But one need to start somewhere.
1683 # return. But one need to start somewhere.
1684 part = op.reply.newpart('reply:changegroup')
1684 part = op.reply.newpart('reply:changegroup')
1685 part.addparam(
1685 part.addparam(
1686 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
1686 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
1687 part.addparam('return', '%i' % ret, mandatory=False)
1687 part.addparam('return', '%i' % ret, mandatory=False)
1688 try:
1688 try:
1689 real_part.validate()
1689 real_part.validate()
1690 except error.Abort as e:
1690 except error.Abort as e:
1691 raise error.Abort(_('bundle at %s is corrupted:\n%s') %
1691 raise error.Abort(_('bundle at %s is corrupted:\n%s') %
1692 (util.hidepassword(raw_url), str(e)))
1692 (util.hidepassword(raw_url), str(e)))
1693 assert not inpart.read()
1693 assert not inpart.read()
1694
1694
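# --- illustrative sketch (editor's addition, not part of bundle2.py) ---
# The handler above delegates size/digest enforcement to util.digestchecker.
# A minimal standalone stand-in for that idea, using only the stdlib; the
# class name and details here are hypothetical, not Mercurial's API:
import hashlib

class digestverifier(object):
    """Wrap a file object, then verify byte count and digests at the end."""
    def __init__(self, fh, size, digests):
        self._fh = fh
        self._size = size          # expected total byte count
        self._seen = 0
        self._expected = digests   # e.g. {'sha1': '...hexdigest...'}
        self._hashers = dict((name, hashlib.new(name)) for name in digests)

    def read(self, n=-1):
        data = self._fh.read(n)
        self._seen += len(data)
        for h in self._hashers.values():
            h.update(data)
        return data

    def validate(self):
        if self._seen != self._size:
            raise ValueError('size mismatch: got %d, expected %d'
                             % (self._seen, self._size))
        for name, h in self._hashers.items():
            if h.hexdigest() != self._expected[name]:
                raise ValueError('%s digest mismatch' % name)
# --- end sketch ---
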
@parthandler('reply:changegroup', ('return', 'in-reply-to'))
def handlereplychangegroup(op, inpart):
    ret = int(inpart.params['return'])
    replyto = int(inpart.params['in-reply-to'])
    op.records.add('changegroup', {'return': ret}, replyto)

@parthandler('check:heads')
def handlecheckheads(op, inpart):
    """check that the heads of the repo did not change

    This is used to detect a push race when using unbundle.
    This replaces the "heads" argument of unbundle."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    # Trigger a transaction so that we are guaranteed to have the lock now.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()
    if sorted(heads) != sorted(op.repo.heads()):
        raise error.PushRaced('repository changed while pushing - '
                              'please try again')

@parthandler('check:updated-heads')
def handlecheckupdatedheads(op, inpart):
    """check for a race on the heads touched by a push

    This is similar to 'check:heads' but focuses on the heads actually
    updated during the push. Other activity on unrelated heads is ignored.

    This allows servers with high traffic to avoid push contention as long
    as only unrelated parts of the graph are involved."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    # trigger a transaction so that we are guaranteed to have the lock now.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()

    currentheads = set()
    for ls in op.repo.branchmap().itervalues():
        currentheads.update(ls)

    for h in heads:
        if h not in currentheads:
            raise error.PushRaced('repository changed while pushing - '
                                  'please try again')

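# --- illustrative sketch (editor's addition) ---
# Both check:* parts above carry a bare concatenation of 20-byte binary
# nodes, consumed with the read(20) loop. The same framing as a small,
# self-contained function (name is illustrative, not bundle2 API):
import io

NODELEN = 20  # length of a binary sha1 node

def readnodes(fh):
    """Read fixed-width binary nodes until the stream is exhausted."""
    nodes = []
    while True:
        chunk = fh.read(NODELEN)
        if not chunk:
            return nodes
        if len(chunk) < NODELEN:
            raise ValueError('truncated node record')
        nodes.append(chunk)

# usage: three fake nodes back to back
payload = io.BytesIO(b'\x11' * 20 + b'\x22' * 20 + b'\x33' * 20)
assert len(readnodes(payload)) == 3
# --- end sketch ---
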
@parthandler('output')
def handleoutput(op, inpart):
    """forward output captured on the server to the client"""
    for line in inpart.read().splitlines():
        op.ui.status(_('remote: %s\n') % line)

@parthandler('replycaps')
def handlereplycaps(op, inpart):
    """Notify that a reply bundle should be created

    The payload contains the capabilities information for the reply"""
    caps = decodecaps(inpart.read())
    if op.reply is None:
        op.reply = bundle20(op.ui, caps)

class AbortFromPart(error.Abort):
    """Sub-class of Abort that denotes an error from a bundle2 part."""

@parthandler('error:abort', ('message', 'hint'))
def handleerrorabort(op, inpart):
    """Used to transmit abort error over the wire"""
    raise AbortFromPart(inpart.params['message'],
                        hint=inpart.params.get('hint'))

@parthandler('error:pushkey', ('namespace', 'key', 'new', 'old', 'ret',
                               'in-reply-to'))
def handleerrorpushkey(op, inpart):
    """Used to transmit failure of a mandatory pushkey over the wire"""
    kwargs = {}
    for name in ('namespace', 'key', 'new', 'old', 'ret'):
        value = inpart.params.get(name)
        if value is not None:
            kwargs[name] = value
    raise error.PushkeyFailed(inpart.params['in-reply-to'], **kwargs)

@parthandler('error:unsupportedcontent', ('parttype', 'params'))
def handleerrorunsupportedcontent(op, inpart):
    """Used to transmit unknown content error over the wire"""
    kwargs = {}
    parttype = inpart.params.get('parttype')
    if parttype is not None:
        kwargs['parttype'] = parttype
    params = inpart.params.get('params')
    if params is not None:
        kwargs['params'] = params.split('\0')

    raise error.BundleUnknownFeatureError(**kwargs)

@parthandler('error:pushraced', ('message',))
def handleerrorpushraced(op, inpart):
    """Used to transmit push race error over the wire"""
    raise error.ResponseError(_('push failed:'), inpart.params['message'])

@parthandler('listkeys', ('namespace',))
def handlelistkeys(op, inpart):
    """retrieve pushkey namespace content stored in a bundle2"""
    namespace = inpart.params['namespace']
    r = pushkey.decodekeys(inpart.read())
    op.records.add('listkeys', (namespace, r))

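# --- illustrative sketch (editor's addition) ---
# The listkeys payload parsed by pushkey.decodekeys above is a plain
# line-oriented listing; a minimal parser of the same shape, assuming one
# tab-separated key/value pair per line:
def decodekeys(data):
    """Parse a pushkey namespace listing into a dictionary."""
    result = {}
    for line in data.splitlines():
        key, value = line.split(b'\t', 1)
        result[key] = value
    return result

assert decodekeys(b'stable\t11112222\ndefault\t33334444') == {
    b'stable': b'11112222', b'default': b'33334444'}
# --- end sketch ---
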
@parthandler('pushkey', ('namespace', 'key', 'old', 'new'))
def handlepushkey(op, inpart):
    """process a pushkey request"""
    dec = pushkey.decode
    namespace = dec(inpart.params['namespace'])
    key = dec(inpart.params['key'])
    old = dec(inpart.params['old'])
    new = dec(inpart.params['new'])
    # Grab the transaction to ensure that we have the lock before performing
    # the pushkey.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()
    ret = op.repo.pushkey(namespace, key, old, new)
    record = {'namespace': namespace,
              'key': key,
              'old': old,
              'new': new}
    op.records.add('pushkey', record)
    if op.reply is not None:
        rpart = op.reply.newpart('reply:pushkey')
        rpart.addparam(
            'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
        rpart.addparam('return', '%i' % ret, mandatory=False)
    if inpart.mandatory and not ret:
        kwargs = {}
        for key in ('namespace', 'key', 'new', 'old', 'ret'):
            if key in inpart.params:
                kwargs[key] = inpart.params[key]
        raise error.PushkeyFailed(partid=str(inpart.id), **kwargs)

-def _readphaseheads(inpart):
-    headsbyphase = [[] for i in phases.allphases]
-    entrysize = phases._fphasesentry.size
-    while True:
-        entry = inpart.read(entrysize)
-        if len(entry) < entrysize:
-            if entry:
-                raise error.Abort(_('bad phase-heads bundle part'))
-            break
-        phase, node = phases._fphasesentry.unpack(entry)
-        headsbyphase[phase].append(node)
-    return headsbyphase
-
@parthandler('phase-heads')
def handlephases(op, inpart):
    """apply phases from bundle part to repo"""
-    headsbyphase = _readphaseheads(inpart)
+    headsbyphase = phases.binarydecode(inpart)
    phases.updatephases(op.repo.unfiltered(), op.gettransaction(), headsbyphase)
    op.records.add('phase-heads', {})

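# --- illustrative sketch (editor's addition) ---
# The phase-heads payload decoded above is a flat sequence of fixed-size
# records: a big-endian 32-bit phase number followed by a 20-byte node
# (phases._fphasesentry). A standalone round trip under that assumed
# layout; unlike the real phases.binarydecode, this takes bytes rather
# than a stream, and the helper names are hypothetical:
import struct

entry = struct.Struct('>i20s')  # phase number + binary node

def encodeheads(headsbyphase):
    """Pack per-phase head lists into the wire format."""
    return b''.join(entry.pack(phase, node)
                    for phase, nodes in enumerate(headsbyphase)
                    for node in nodes)

def decodeheads(data, nphases=3):
    """Unpack the wire format back into per-phase lists of nodes."""
    headsbyphase = [[] for _ in range(nphases)]
    for offset in range(0, len(data), entry.size):
        phase, node = entry.unpack_from(data, offset)
        headsbyphase[phase].append(node)
    return headsbyphase

blob = encodeheads([[b'\x00' * 20], [], [b'\xff' * 20]])
assert decodeheads(blob)[2] == [b'\xff' * 20]
# --- end sketch ---
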
@parthandler('reply:pushkey', ('return', 'in-reply-to'))
def handlepushkeyreply(op, inpart):
    """retrieve the result of a pushkey request"""
    ret = int(inpart.params['return'])
    partid = int(inpart.params['in-reply-to'])
    op.records.add('pushkey', {'return': ret}, partid)

@parthandler('obsmarkers')
def handleobsmarker(op, inpart):
    """add a stream of obsmarkers to the repo"""
    tr = op.gettransaction()
    markerdata = inpart.read()
    if op.ui.config('experimental', 'obsmarkers-exchange-debug'):
        op.ui.write(('obsmarker-exchange: %i bytes received\n')
                    % len(markerdata))
    # The mergemarkers call will crash if marker creation is not enabled.
    # We want to avoid this if the part is advisory.
    if not inpart.mandatory and op.repo.obsstore.readonly:
        op.repo.ui.debug('ignoring obsolescence markers, feature not '
                         'enabled\n')
        return
    new = op.repo.obsstore.mergemarkers(tr, markerdata)
    op.repo.invalidatevolatilesets()
    if new:
        op.repo.ui.status(_('%i new obsolescence markers\n') % new)
    op.records.add('obsmarkers', {'new': new})
    if op.reply is not None:
        rpart = op.reply.newpart('reply:obsmarkers')
        rpart.addparam(
            'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
        rpart.addparam('new', '%i' % new, mandatory=False)


@parthandler('reply:obsmarkers', ('new', 'in-reply-to'))
def handleobsmarkerreply(op, inpart):
    """retrieve the result of an obsmarkers request"""
    ret = int(inpart.params['new'])
    partid = int(inpart.params['in-reply-to'])
    op.records.add('obsmarkers', {'new': ret}, partid)

@parthandler('hgtagsfnodes')
def handlehgtagsfnodes(op, inpart):
    """Applies .hgtags fnodes cache entries to the local repo.

    Payload is pairs of 20 byte changeset nodes and filenodes.
    """
    # Grab the transaction so we ensure that we have the lock at this point.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()
    cache = tags.hgtagsfnodescache(op.repo.unfiltered())

    count = 0
    while True:
        node = inpart.read(20)
        fnode = inpart.read(20)
        if len(node) < 20 or len(fnode) < 20:
            op.ui.debug('ignoring incomplete received .hgtags fnodes data\n')
            break
        cache.setfnode(node, fnode)
        count += 1

    cache.write()
    op.ui.debug('applied %i hgtags fnodes cache entries\n' % count)

1908
1922 @parthandler('pushvars')
1909 @parthandler('pushvars')
1923 def bundle2getvars(op, part):
1910 def bundle2getvars(op, part):
1924 '''unbundle a bundle2 containing shellvars on the server'''
1911 '''unbundle a bundle2 containing shellvars on the server'''
1925 # An option to disable unbundling on server-side for security reasons
1912 # An option to disable unbundling on server-side for security reasons
1926 if op.ui.configbool('push', 'pushvars.server'):
1913 if op.ui.configbool('push', 'pushvars.server'):
1927 hookargs = {}
1914 hookargs = {}
1928 for key, value in part.advisoryparams:
1915 for key, value in part.advisoryparams:
1929 key = key.upper()
1916 key = key.upper()
1930 # We want pushed variables to have USERVAR_ prepended so we know
1917 # We want pushed variables to have USERVAR_ prepended so we know
1931 # they came from the --pushvar flag.
1918 # they came from the --pushvar flag.
1932 key = "USERVAR_" + key
1919 key = "USERVAR_" + key
1933 hookargs[key] = value
1920 hookargs[key] = value
1934 op.addhookargs(hookargs)
1921 op.addhookargs(hookargs)
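
# --- illustrative sketch (editor's addition) ---
# The name mangling performed by bundle2getvars, as a plain function
# (hypothetical helper, not part of the module):
def tohookargs(advisoryparams):
    """Turn --pushvar key/value pairs into USERVAR_-prefixed hook args."""
    return dict(('USERVAR_' + key.upper(), value)
                for key, value in advisoryparams)

assert tohookargs([('debug', '1'), ('reason', 'ci')]) == {
    'USERVAR_DEBUG': '1', 'USERVAR_REASON': 'ci'}
# --- end sketch ---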
@@ -1,2310 +1,2310 b''
# debugcommands.py - command processing for debug* commands
#
# Copyright 2005-2016 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

import codecs
import collections
import difflib
import errno
import operator
import os
import random
import socket
import ssl
import string
import sys
import tempfile
import time

from .i18n import _
from .node import (
    bin,
    hex,
    nullhex,
    nullid,
    nullrev,
    short,
)
from . import (
    bundle2,
    changegroup,
    cmdutil,
    color,
    context,
    dagparser,
    dagutil,
    encoding,
    error,
    exchange,
    extensions,
    filemerge,
    fileset,
    formatter,
    hg,
    localrepo,
    lock as lockmod,
    merge as mergemod,
    obsolete,
    obsutil,
    phases,
    policy,
    pvec,
    pycompat,
    registrar,
    repair,
    revlog,
    revset,
    revsetlang,
    scmutil,
    setdiscovery,
    simplemerge,
    smartset,
    sslutil,
    streamclone,
    templater,
    treediscovery,
    upgrade,
    util,
    vfs as vfsmod,
)

release = lockmod.release

command = registrar.command()

@command('debugancestor', [], _('[INDEX] REV1 REV2'), optionalrepo=True)
def debugancestor(ui, repo, *args):
    """find the ancestor revision of two revisions in a given index"""
    if len(args) == 3:
        index, rev1, rev2 = args
        r = revlog.revlog(vfsmod.vfs(pycompat.getcwd(), audit=False), index)
        lookup = r.lookup
    elif len(args) == 2:
        if not repo:
            raise error.Abort(_('there is no Mercurial repository here '
                                '(.hg not found)'))
        rev1, rev2 = args
        r = repo.changelog
        lookup = repo.lookup
    else:
        raise error.Abort(_('either two or three arguments required'))
    a = r.ancestor(lookup(rev1), lookup(rev2))
    ui.write('%d:%s\n' % (r.rev(a), hex(a)))

@command('debugapplystreamclonebundle', [], 'FILE')
def debugapplystreamclonebundle(ui, repo, fname):
    """apply a stream clone bundle file"""
    f = hg.openpath(ui, fname)
    gen = exchange.readbundle(ui, f, fname)
    gen.apply(repo)

@command('debugbuilddag',
    [('m', 'mergeable-file', None, _('add single file mergeable changes')),
    ('o', 'overwritten-file', None, _('add single file all revs overwrite')),
    ('n', 'new-file', None, _('add new file at each rev'))],
    _('[OPTION]... [TEXT]'))
def debugbuilddag(ui, repo, text=None,
                  mergeable_file=False,
                  overwritten_file=False,
                  new_file=False):
    """builds a repo with a given DAG from scratch in the current empty repo

    The description of the DAG is read from stdin if not given on the
    command line.

    Elements:

    - "+n" is a linear run of n nodes based on the current default parent
    - "." is a single node based on the current default parent
    - "$" resets the default parent to null (implied at the start);
      otherwise the default parent is always the last node created
    - "<p" sets the default parent to the backref p
    - "*p" is a fork at parent p, which is a backref
    - "*p1/p2" is a merge of parents p1 and p2, which are backrefs
    - "/p2" is a merge of the preceding node and p2
    - ":tag" defines a local tag for the preceding node
    - "@branch" sets the named branch for subsequent nodes
    - "#...\\n" is a comment up to the end of the line

    Whitespace between the above elements is ignored.

    A backref is either

    - a number n, which references the node curr-n, where curr is the current
      node, or
    - the name of a local tag you placed earlier using ":tag", or
    - empty to denote the default parent.

    All string-valued elements are either strictly alphanumeric, or must
    be enclosed in double quotes ("..."), with "\\" as escape character.
    """

    if text is None:
        ui.status(_("reading DAG from stdin\n"))
        text = ui.fin.read()

    cl = repo.changelog
    if len(cl) > 0:
        raise error.Abort(_('repository is not empty'))

    # determine number of revs in DAG
    total = 0
    for type, data in dagparser.parsedag(text):
        if type == 'n':
            total += 1

    if mergeable_file:
        linesperrev = 2
        # make a file with k lines per rev
        initialmergedlines = [str(i) for i in xrange(0, total * linesperrev)]
        initialmergedlines.append("")

    tags = []

    wlock = lock = tr = None
    try:
        wlock = repo.wlock()
        lock = repo.lock()
        tr = repo.transaction("builddag")

        at = -1
        atbranch = 'default'
        nodeids = []
        id = 0
        ui.progress(_('building'), id, unit=_('revisions'), total=total)
        for type, data in dagparser.parsedag(text):
            if type == 'n':
                ui.note(('node %s\n' % str(data)))
                id, ps = data

                files = []
                fctxs = {}

                p2 = None
                if mergeable_file:
                    fn = "mf"
                    p1 = repo[ps[0]]
                    if len(ps) > 1:
                        p2 = repo[ps[1]]
                        pa = p1.ancestor(p2)
                        base, local, other = [x[fn].data() for x in (pa, p1,
                                                                     p2)]
                        m3 = simplemerge.Merge3Text(base, local, other)
                        ml = [l.strip() for l in m3.merge_lines()]
                        ml.append("")
                    elif at > 0:
                        ml = p1[fn].data().split("\n")
                    else:
                        ml = initialmergedlines
                    ml[id * linesperrev] += " r%i" % id
                    mergedtext = "\n".join(ml)
                    files.append(fn)
                    fctxs[fn] = context.memfilectx(repo, fn, mergedtext)

                if overwritten_file:
                    fn = "of"
                    files.append(fn)
                    fctxs[fn] = context.memfilectx(repo, fn, "r%i\n" % id)

                if new_file:
                    fn = "nf%i" % id
                    files.append(fn)
                    fctxs[fn] = context.memfilectx(repo, fn, "r%i\n" % id)
                    if len(ps) > 1:
                        if not p2:
                            p2 = repo[ps[1]]
                        for fn in p2:
                            if fn.startswith("nf"):
                                files.append(fn)
                                fctxs[fn] = p2[fn]

                def fctxfn(repo, cx, path):
                    return fctxs.get(path)

                if len(ps) == 0 or ps[0] < 0:
                    pars = [None, None]
                elif len(ps) == 1:
                    pars = [nodeids[ps[0]], None]
                else:
                    pars = [nodeids[p] for p in ps]
                cx = context.memctx(repo, pars, "r%i" % id, files, fctxfn,
                                    date=(id, 0),
                                    user="debugbuilddag",
                                    extra={'branch': atbranch})
                nodeid = repo.commitctx(cx)
                nodeids.append(nodeid)
                at = id
            elif type == 'l':
                id, name = data
                ui.note(('tag %s\n' % name))
                tags.append("%s %s\n" % (hex(repo.changelog.node(id)), name))
            elif type == 'a':
                ui.note(('branch %s\n' % data))
                atbranch = data
            ui.progress(_('building'), id, unit=_('revisions'), total=total)
        tr.close()

        if tags:
            repo.vfs.write("localtags", "".join(tags))
    finally:
        ui.progress(_('building'), None)
        release(tr, lock, wlock)

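# --- illustrative sketch (editor's addition) ---
# A worked example of the DAG syntax documented above (input assumed from
# the element list in the docstring): two 2-node linear chains ("$"
# restarts from null), then "/2" merges the current head with the head of
# the first chain (backref 2 = curr - 2).
dagtext = '+2 $ +2 /2'
# assumed shell usage, in a fresh empty repository:
#   hg init demo && cd demo
#   hg debugbuilddag '+2 $ +2 /2'
# --- end sketch ---
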
def _debugchangegroup(ui, gen, all=None, indent=0, **opts):
    indent_string = ' ' * indent
    if all:
        ui.write(("%sformat: id, p1, p2, cset, delta base, len(delta)\n")
                 % indent_string)

        def showchunks(named):
            ui.write("\n%s%s\n" % (indent_string, named))
            for deltadata in gen.deltaiter():
                node, p1, p2, cs, deltabase, delta, flags = deltadata
                ui.write("%s%s %s %s %s %s %s\n" %
                         (indent_string, hex(node), hex(p1), hex(p2),
                          hex(cs), hex(deltabase), len(delta)))

        chunkdata = gen.changelogheader()
        showchunks("changelog")
        chunkdata = gen.manifestheader()
        showchunks("manifest")
        for chunkdata in iter(gen.filelogheader, {}):
            fname = chunkdata['filename']
            showchunks(fname)
    else:
        if isinstance(gen, bundle2.unbundle20):
            raise error.Abort(_('use debugbundle2 for this file'))
        chunkdata = gen.changelogheader()
        for deltadata in gen.deltaiter():
            node, p1, p2, cs, deltabase, delta, flags = deltadata
            ui.write("%s%s\n" % (indent_string, hex(node)))

def _debugobsmarkers(ui, part, indent=0, **opts):
    """display version and markers contained in 'data'"""
    opts = pycompat.byteskwargs(opts)
    data = part.read()
    indent_string = ' ' * indent
    try:
        version, markers = obsolete._readmarkers(data)
    except error.UnknownVersion as exc:
        msg = "%sunsupported version: %s (%d bytes)\n"
        msg %= indent_string, exc.version, len(data)
        ui.write(msg)
    else:
        msg = "%sversion: %s (%d bytes)\n"
        msg %= indent_string, version, len(data)
        ui.write(msg)
        fm = ui.formatter('debugobsolete', opts)
        for rawmarker in sorted(markers):
            m = obsutil.marker(None, rawmarker)
            fm.startitem()
            fm.plain(indent_string)
            cmdutil.showmarker(fm, m)
        fm.end()

def _debugphaseheads(ui, data, indent=0):
    """display the phase heads contained in 'data'"""
    indent_string = ' ' * indent
-    headsbyphase = bundle2._readphaseheads(data)
+    headsbyphase = phases.binarydecode(data)
    for phase in phases.allphases:
        for head in headsbyphase[phase]:
            ui.write(indent_string)
            ui.write('%s %s\n' % (hex(head), phases.phasenames[phase]))

def _quasirepr(thing):
    if isinstance(thing, (dict, util.sortdict, collections.OrderedDict)):
        return '{%s}' % (
            b', '.join(b'%s: %s' % (k, thing[k]) for k in sorted(thing)))
    return pycompat.bytestr(repr(thing))

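# --- illustrative note (editor's addition) ---
# _quasirepr gives dicts a stable, key-sorted rendering for debug output;
# for instance, under Python 2 (where bytes and str coincide):
assert _quasirepr({b'b': 2, b'a': 1}) == '{a: 1, b: 2}'
# --- end note ---
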
def _debugbundle2(ui, gen, all=None, **opts):
    """lists the contents of a bundle2"""
    if not isinstance(gen, bundle2.unbundle20):
        raise error.Abort(_('not a bundle2 file'))
    ui.write(('Stream params: %s\n' % _quasirepr(gen.params)))
    parttypes = opts.get(r'part_type', [])
    for part in gen.iterparts():
        if parttypes and part.type not in parttypes:
            continue
        ui.write('%s -- %s\n' % (part.type, _quasirepr(part.params)))
        if part.type == 'changegroup':
            version = part.params.get('version', '01')
            cg = changegroup.getunbundler(version, part, 'UN')
            _debugchangegroup(ui, cg, all=all, indent=4, **opts)
        if part.type == 'obsmarkers':
            _debugobsmarkers(ui, part, indent=4, **opts)
        if part.type == 'phase-heads':
            _debugphaseheads(ui, part, indent=4)

@command('debugbundle',
    [('a', 'all', None, _('show all details')),
    ('', 'part-type', [], _('show only the named part type')),
    ('', 'spec', None, _('print the bundlespec of the bundle'))],
    _('FILE'),
    norepo=True)
def debugbundle(ui, bundlepath, all=None, spec=None, **opts):
    """lists the contents of a bundle"""
    with hg.openpath(ui, bundlepath) as f:
        if spec:
            spec = exchange.getbundlespec(ui, f)
            ui.write('%s\n' % spec)
            return

        gen = exchange.readbundle(ui, f, bundlepath)
        if isinstance(gen, bundle2.unbundle20):
            return _debugbundle2(ui, gen, all=all, **opts)
        _debugchangegroup(ui, gen, all=all, **opts)

@command('debugcheckstate', [], '')
def debugcheckstate(ui, repo):
    """validate the correctness of the current dirstate"""
    parent1, parent2 = repo.dirstate.parents()
    m1 = repo[parent1].manifest()
    m2 = repo[parent2].manifest()
    errors = 0
    for f in repo.dirstate:
        state = repo.dirstate[f]
        if state in "nr" and f not in m1:
            ui.warn(_("%s in state %s, but not in manifest1\n") % (f, state))
            errors += 1
        if state in "a" and f in m1:
            ui.warn(_("%s in state %s, but also in manifest1\n") % (f, state))
            errors += 1
        if state in "m" and f not in m1 and f not in m2:
            ui.warn(_("%s in state %s, but not in either manifest\n") %
                    (f, state))
            errors += 1
    for f in m1:
        state = repo.dirstate[f]
        if state not in "nrm":
            ui.warn(_("%s in manifest1, but listed as state %s") % (f, state))
            errors += 1
    if errors:
        # use a name that does not shadow the 'error' module
        errstr = _(".hg/dirstate inconsistent with current parent's manifest")
        raise error.Abort(errstr)

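# --- illustrative note (editor's addition) ---
# The one-letter dirstate states tested above are the standard dirstate
# entry codes:
DIRSTATE_STATES = {
    'n': 'normal (tracked)',
    'a': 'added',
    'r': 'removed',
    'm': 'merged',
}
# --- end note ---
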
@command('debugcolor',
    [('', 'style', None, _('show all configured styles'))],
    'hg debugcolor')
def debugcolor(ui, repo, **opts):
    """show available colors, effects or styles"""
    ui.write(('color mode: %s\n') % ui._colormode)
    if opts.get(r'style'):
        return _debugdisplaystyle(ui)
    else:
        return _debugdisplaycolor(ui)

def _debugdisplaycolor(ui):
    ui = ui.copy()
    ui._styles.clear()
    for effect in color._activeeffects(ui).keys():
        ui._styles[effect] = effect
    if ui._terminfoparams:
        for k, v in ui.configitems('color'):
            if k.startswith('color.'):
                ui._styles[k] = k[6:]
            elif k.startswith('terminfo.'):
                ui._styles[k] = k[9:]
    ui.write(_('available colors:\n'))
    # sort labels with a '_' after the others to group the '_background'
    # entries together.
    items = sorted(ui._styles.items(),
                   key=lambda i: ('_' in i[0], i[0], i[1]))
    for colorname, label in items:
        ui.write(('%s\n') % colorname, label=label)

def _debugdisplaystyle(ui):
    ui.write(_('available styles:\n'))
    width = max(len(s) for s in ui._styles)
    for label, effects in sorted(ui._styles.items()):
        ui.write('%s' % label, label=label)
        if effects:
            # pad the label column to a fixed width before listing effects
            ui.write(': ')
            ui.write(' ' * (max(0, width - len(label))))
            ui.write(', '.join(ui.label(e, e) for e in effects.split()))
        ui.write('\n')

@command('debugcreatestreamclonebundle', [], 'FILE')
def debugcreatestreamclonebundle(ui, repo, fname):
    """create a stream clone bundle file

    Stream bundles are special bundles that are essentially archives of
    revlog files. They are commonly used for cloning very quickly.
    """
    # TODO we may want to turn this into an abort when this functionality
    # is moved into `hg bundle`.
    if phases.hassecret(repo):
        ui.warn(_('(warning: stream clone bundle will contain secret '
                  'revisions)\n'))

    requirements, gen = streamclone.generatebundlev1(repo)
    changegroup.writechunks(ui, gen, fname)

    ui.write(_('bundle requirements: %s\n') % ', '.join(sorted(requirements)))

@command('debugdag',
    [('t', 'tags', None, _('use tags as labels')),
    ('b', 'branches', None, _('annotate with branch names')),
    ('', 'dots', None, _('use dots for runs')),
    ('s', 'spaces', None, _('separate elements by spaces'))],
    _('[OPTION]... [FILE [REV]...]'),
    optionalrepo=True)
def debugdag(ui, repo, file_=None, *revs, **opts):
    """format the changelog or an index DAG as a concise textual description

    If you pass a revlog index, the revlog's DAG is emitted. If you list
    revision numbers, they get labeled in the output as rN.

    Otherwise, the changelog DAG of the current repo is emitted.
    """
    spaces = opts.get(r'spaces')
    dots = opts.get(r'dots')
    if file_:
        rlog = revlog.revlog(vfsmod.vfs(pycompat.getcwd(), audit=False),
                             file_)
        revs = set((int(r) for r in revs))
        def events():
            for r in rlog:
                yield 'n', (r, list(p for p in rlog.parentrevs(r)
                                    if p != -1))
                if r in revs:
                    yield 'l', (r, "r%i" % r)
    elif repo:
        cl = repo.changelog
        tags = opts.get(r'tags')
        branches = opts.get(r'branches')
        if tags:
            labels = {}
            for l, n in repo.tags().items():
                labels.setdefault(cl.rev(n), []).append(l)
        def events():
            b = "default"
            for r in cl:
                if branches:
                    newb = cl.read(cl.node(r))[5]['branch']
                    if newb != b:
                        yield 'a', newb
                        b = newb
                yield 'n', (r, list(p for p in cl.parentrevs(r)
                                    if p != -1))
                if tags:
                    ls = labels.get(r)
                    if ls:
                        for l in ls:
                            yield 'l', (r, l)
    else:
        raise error.Abort(_('need repo for changelog dag'))

    for line in dagparser.dagtextlines(events(),
                                       addspaces=spaces,
                                       wraplabels=True,
                                       wrapannotations=True,
                                       wrapnonlinear=dots,
                                       usedots=dots,
                                       maxlinewidth=70):
        ui.write(line)
    ui.write("\n")

@command('debugdata', cmdutil.debugrevlogopts, _('-c|-m|FILE REV'))
def debugdata(ui, repo, file_, rev=None, **opts):
    """dump the contents of a data file revision"""
    opts = pycompat.byteskwargs(opts)
    if opts.get('changelog') or opts.get('manifest') or opts.get('dir'):
        if rev is not None:
            raise error.CommandError('debugdata', _('invalid arguments'))
        file_, rev = None, file_
    elif rev is None:
        raise error.CommandError('debugdata', _('invalid arguments'))
    r = cmdutil.openrevlog(repo, 'debugdata', file_, opts)
    try:
        ui.write(r.revision(r.lookup(rev), raw=True))
    except KeyError:
        raise error.Abort(_('invalid revision identifier %s') % rev)

@command('debugdate',
    [('e', 'extended', None, _('try extended date formats'))],
    _('[-e] DATE [RANGE]'),
    norepo=True, optionalrepo=True)
def debugdate(ui, date, range=None, **opts):
    """parse and display a date"""
    if opts[r"extended"]:
        d = util.parsedate(date, util.extendeddateformats)
    else:
        d = util.parsedate(date)
    ui.write(("internal: %s %s\n") % d)
    ui.write(("standard: %s\n") % util.datestr(d))
    if range:
        m = util.matchdate(range)
        ui.write(("match: %s\n") % m(d[0]))

@command('debugdeltachain',
    cmdutil.debugrevlogopts + cmdutil.formatteropts,
    _('-c|-m|FILE'),
    optionalrepo=True)
def debugdeltachain(ui, repo, file_=None, **opts):
    """dump information about delta chains in a revlog

    Output can be templatized. Available template keywords are:

    :``rev``:       revision number
    :``chainid``:   delta chain identifier (numbered by unique base)
    :``chainlen``:  delta chain length to this revision
    :``prevrev``:   previous revision in delta chain
    :``deltatype``: role of delta / how it was computed
    :``compsize``:  compressed size of revision
    :``uncompsize``: uncompressed size of revision
    :``chainsize``: total size of compressed revisions in chain
    :``chainratio``: total chain size divided by uncompressed revision size
                    (new delta chains typically start at ratio 2.00)
    :``lindist``:   linear distance from base revision in delta chain to end
                    of this revision
    :``extradist``: total size of revisions not part of this delta chain from
                    base of delta chain to end of this revision; a measurement
                    of how much extra data we need to read/seek across to read
                    the delta chain for this revision
    :``extraratio``: extradist divided by chainsize; another representation of
                    how much unrelated data is needed to load this delta chain
    """
    opts = pycompat.byteskwargs(opts)
    r = cmdutil.openrevlog(repo, 'debugdeltachain', file_, opts)
    index = r.index
    generaldelta = r.version & revlog.FLAG_GENERALDELTA

    def revinfo(rev):
        e = index[rev]
        compsize = e[1]
        uncompsize = e[2]
        chainsize = 0

        if generaldelta:
            if e[3] == e[5]:
                deltatype = 'p1'
            elif e[3] == e[6]:
                deltatype = 'p2'
            elif e[3] == rev - 1:
                deltatype = 'prev'
            elif e[3] == rev:
                deltatype = 'base'
            else:
                deltatype = 'other'
        else:
            if e[3] == rev:
                deltatype = 'base'
            else:
                deltatype = 'prev'

        chain = r._deltachain(rev)[0]
        for iterrev in chain:
            e = index[iterrev]
            chainsize += e[1]

        return compsize, uncompsize, deltatype, chain, chainsize

    fm = ui.formatter('debugdeltachain', opts)

    fm.plain('    rev  chain# chainlen     prev   delta       '
             'size    rawsize  chainsize     ratio   lindist extradist '
             'extraratio\n')

    chainbases = {}
    for rev in r:
        comp, uncomp, deltatype, chain, chainsize = revinfo(rev)
        chainbase = chain[0]
        chainid = chainbases.setdefault(chainbase, len(chainbases) + 1)
        basestart = r.start(chainbase)
        revstart = r.start(rev)
        lineardist = revstart + comp - basestart
        extradist = lineardist - chainsize
        try:
            prevrev = chain[-2]
        except IndexError:
            prevrev = -1

        chainratio = float(chainsize) / float(uncomp)
        extraratio = float(extradist) / float(chainsize)

        fm.startitem()
        fm.write('rev chainid chainlen prevrev deltatype compsize '
                 'uncompsize chainsize chainratio lindist extradist '
                 'extraratio',
                 '%7d %7d %8d %8d %7s %10d %10d %10d %9.5f %9d %9d %10.5f\n',
                 rev, chainid, len(chain), prevrev, deltatype, comp,
                 uncomp, chainsize, chainratio, lineardist, extradist,
                 extraratio,
                 rev=rev, chainid=chainid, chainlen=len(chain),
                 prevrev=prevrev, deltatype=deltatype, compsize=comp,
                 uncompsize=uncomp, chainsize=chainsize,
                 chainratio=chainratio, lindist=lineardist,
                 extradist=extradist, extraratio=extraratio)

    fm.end()

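# A minimal sketch of templatized use, exercising the keywords documented in
# the docstring above (revision data here is hypothetical):
#
#   $ hg debugdeltachain -m -T '{rev} {chainid} {chainlen} {deltatype}\n'
#   0 1 1 base
#   1 1 2 p1
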
@command('debugdirstate|debugstate',
    [('', 'nodates', None, _('do not display the saved mtime')),
     ('', 'datesort', None, _('sort by saved mtime'))],
    _('[OPTION]...'))
def debugstate(ui, repo, **opts):
    """show the contents of the current dirstate"""

    nodates = opts.get(r'nodates')
    datesort = opts.get(r'datesort')

    timestr = ""
    if datesort:
        keyfunc = lambda x: (x[1][3], x[0]) # sort by mtime, then by filename
    else:
        keyfunc = None # sort by filename
    for file_, ent in sorted(repo.dirstate._map.iteritems(), key=keyfunc):
        if ent[3] == -1:
            timestr = 'unset               '
        elif nodates:
            timestr = 'set                 '
        else:
            timestr = time.strftime("%Y-%m-%d %H:%M:%S ",
                                    time.localtime(ent[3]))
        if ent[1] & 0o20000:
            mode = 'lnk'
        else:
            mode = '%3o' % (ent[1] & 0o777 & ~util.umask)
        ui.write("%c %s %10d %s%s\n" % (ent[0], mode, ent[2], timestr, file_))
    for f in repo.dirstate.copies():
        ui.write(_("copy: %s -> %s\n") % (repo.dirstate.copied(f), f))

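# Each entry prints as "<state> <mode> <size> <mtime> <filename>"; a
# hypothetical line for a normal-state file would look like:
#
#   $ hg debugdirstate
#   n 644         12 2017-09-20 17:32:10 a
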
@command('debugdiscovery',
    [('', 'old', None, _('use old-style discovery')),
     ('', 'nonheads', None,
      _('use old-style discovery with non-heads included')),
    ] + cmdutil.remoteopts,
    _('[-l REV] [-r REV] [-b BRANCH]... [OTHER]'))
def debugdiscovery(ui, repo, remoteurl="default", **opts):
    """runs the changeset discovery protocol in isolation"""
    opts = pycompat.byteskwargs(opts)
    remoteurl, branches = hg.parseurl(ui.expandpath(remoteurl),
                                      opts.get('branch'))
    remote = hg.peer(repo, opts, remoteurl)
    ui.status(_('comparing with %s\n') % util.hidepassword(remoteurl))

    # make sure tests are repeatable
    random.seed(12323)

    def doit(localheads, remoteheads, remote=remote):
        if opts.get('old'):
            if localheads:
                raise error.Abort('cannot use localheads with old style '
                                  'discovery')
            if not util.safehasattr(remote, 'branches'):
                # enable in-client legacy support
                remote = localrepo.locallegacypeer(remote.local())
            common, _in, hds = treediscovery.findcommonincoming(repo, remote,
                                                                force=True)
            common = set(common)
            if not opts.get('nonheads'):
                ui.write(("unpruned common: %s\n") %
                         " ".join(sorted(short(n) for n in common)))
                dag = dagutil.revlogdag(repo.changelog)
                all = dag.ancestorset(dag.internalizeall(common))
                common = dag.externalizeall(dag.headsetofconnecteds(all))
        else:
            common, any, hds = setdiscovery.findcommonheads(ui, repo, remote)
        common = set(common)
        rheads = set(hds)
        lheads = set(repo.heads())
        ui.write(("common heads: %s\n") %
                 " ".join(sorted(short(n) for n in common)))
        if lheads <= common:
            ui.write(("local is subset\n"))
        elif rheads <= common:
            ui.write(("remote is subset\n"))

    serverlogs = opts.get('serverlog')
    if serverlogs:
        for filename in serverlogs:
            with open(filename, 'r') as logfile:
                line = logfile.readline()
                while line:
                    parts = line.strip().split(';')
                    op = parts[1]
                    if op == 'cg':
                        pass
                    elif op == 'cgss':
                        doit(parts[2].split(' '), parts[3].split(' '))
                    elif op == 'unb':
                        doit(parts[3].split(' '), parts[2].split(' '))
                    line = logfile.readline()
    else:
        remoterevs, _checkout = hg.addbranchrevs(repo, remote, branches,
                                                 opts.get('remote_head'))
        localrevs = opts.get('local_head')
        doit(localrevs, remoterevs)

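# A typical run against a remote, assuming a reachable peer at the given URL
# (hash below is hypothetical):
#
#   $ hg debugdiscovery http://example.com/repo
#   comparing with http://example.com/repo
#   searching for changes
#   common heads: b5714e113bc0
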
@command('debugextensions', cmdutil.formatteropts, [], norepo=True)
def debugextensions(ui, **opts):
    '''show information about active extensions'''
    opts = pycompat.byteskwargs(opts)
    exts = extensions.extensions(ui)
    hgver = util.version()
    fm = ui.formatter('debugextensions', opts)
    for extname, extmod in sorted(exts, key=operator.itemgetter(0)):
        isinternal = extensions.ismoduleinternal(extmod)
        extsource = pycompat.fsencode(extmod.__file__)
        if isinternal:
            exttestedwith = []  # never expose magic string to users
        else:
            exttestedwith = getattr(extmod, 'testedwith', '').split()
        extbuglink = getattr(extmod, 'buglink', None)

        fm.startitem()

        if ui.quiet or ui.verbose:
            fm.write('name', '%s\n', extname)
        else:
            fm.write('name', '%s', extname)
            if isinternal or hgver in exttestedwith:
                fm.plain('\n')
            elif not exttestedwith:
                fm.plain(_(' (untested!)\n'))
            else:
                lasttestedversion = exttestedwith[-1]
                fm.plain(' (%s!)\n' % lasttestedversion)

        fm.condwrite(ui.verbose and extsource, 'source',
                     _('  location: %s\n'), extsource or "")

        if ui.verbose:
            fm.plain(_('  bundled: %s\n') % ['no', 'yes'][isinternal])
        fm.data(bundled=isinternal)

        fm.condwrite(ui.verbose and exttestedwith, 'testedwith',
                     _('  tested with: %s\n'),
                     fm.formatlist(exttestedwith, name='ver'))

        fm.condwrite(ui.verbose and extbuglink, 'buglink',
                     _('  bug reporting: %s\n'), extbuglink or "")

    fm.end()

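# Hypothetical verbose output for a single bundled extension (paths and
# versions vary by install):
#
#   $ hg debugextensions -v
#   rebase
#     location: /usr/lib/python2.7/site-packages/hgext/rebase.pyc
#     bundled: yes
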
@command('debugfileset',
    [('r', 'rev', '', _('apply the filespec on this revision'), _('REV'))],
    _('[-r REV] FILESPEC'))
def debugfileset(ui, repo, expr, **opts):
    '''parse and apply a fileset specification'''
    ctx = scmutil.revsingle(repo, opts.get(r'rev'), None)
    if ui.verbose:
        tree = fileset.parse(expr)
        ui.note(fileset.prettyformat(tree), "\n")

    for f in ctx.getfileset(expr):
        ui.write("%s\n" % f)

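# Example fileset queries (predicate names assumed from the fileset language;
# results depend entirely on the repository contents):
#
#   $ hg debugfileset 'binary() and size(">1k")'
#   $ hg debugfileset -r .^ 'modified()'
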
@command('debugfsinfo', [], _('[PATH]'), norepo=True)
def debugfsinfo(ui, path="."):
    """show information detected about current filesystem"""
    ui.write(('exec: %s\n') % (util.checkexec(path) and 'yes' or 'no'))
    ui.write(('fstype: %s\n') % (util.getfstype(path) or '(unknown)'))
    ui.write(('symlink: %s\n') % (util.checklink(path) and 'yes' or 'no'))
    ui.write(('hardlink: %s\n') % (util.checknlink(path) and 'yes' or 'no'))
    casesensitive = '(unknown)'
    try:
        with tempfile.NamedTemporaryFile(prefix='.debugfsinfo', dir=path) as f:
            casesensitive = util.fscasesensitive(f.name) and 'yes' or 'no'
    except OSError:
        pass
    ui.write(('case-sensitive: %s\n') % casesensitive)

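# Hypothetical output on a case-sensitive Linux filesystem; each probe above
# maps to one line:
#
#   $ hg debugfsinfo
#   exec: yes
#   fstype: ext4
#   symlink: yes
#   hardlink: yes
#   case-sensitive: yes
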
@command('debuggetbundle',
    [('H', 'head', [], _('id of head node'), _('ID')),
     ('C', 'common', [], _('id of common node'), _('ID')),
     ('t', 'type', 'bzip2', _('bundle compression type to use'), _('TYPE'))],
    _('REPO FILE [-H|-C ID]...'),
    norepo=True)
def debuggetbundle(ui, repopath, bundlepath, head=None, common=None, **opts):
    """retrieves a bundle from a repo

    Every ID must be a full-length hex node id string. Saves the bundle to the
    given file.
    """
    opts = pycompat.byteskwargs(opts)
    repo = hg.peer(ui, opts, repopath)
    if not repo.capable('getbundle'):
        raise error.Abort("getbundle() not supported by target repository")
    args = {}
    if common:
        args[r'common'] = [bin(s) for s in common]
    if head:
        args[r'heads'] = [bin(s) for s in head]
    # TODO: get desired bundlecaps from command line.
    args[r'bundlecaps'] = None
    bundle = repo.getbundle('debug', **args)

    bundletype = opts.get('type', 'bzip2').lower()
    btypes = {'none': 'HG10UN',
              'bzip2': 'HG10BZ',
              'gzip': 'HG10GZ',
              'bundle2': 'HG20'}
    bundletype = btypes.get(bundletype)
    if bundletype not in bundle2.bundletypes:
        raise error.Abort(_('unknown bundle type specified with --type'))
    bundle2.writebundle(ui, bundle, bundlepath, bundletype)

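# Example: save everything reachable from one head as a bundle2 file (node
# id below is hypothetical but must be full-length, as the docstring says):
#
#   $ hg debuggetbundle http://example.com/repo out.hg \
#       -H 043e04cd6a1f6d1d7f463d64f370e48c3b0ede4f -t bundle2
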
@command('debugignore', [], '[FILE]')
def debugignore(ui, repo, *files, **opts):
    """display the combined ignore pattern and information about ignored files

    With no argument display the combined ignore pattern.

    Given space separated file names, shows if the given file is ignored and
    if so, show the ignore rule (file and line number) that matched it.
    """
    ignore = repo.dirstate._ignore
    if not files:
        # Show all the patterns
        ui.write("%s\n" % repr(ignore))
    else:
        m = scmutil.match(repo[None], pats=files)
        for f in m.files():
            nf = util.normpath(f)
            ignored = None
            ignoredata = None
            if nf != '.':
                if ignore(nf):
                    ignored = nf
                    ignoredata = repo.dirstate._ignorefileandline(nf)
                else:
                    for p in util.finddirs(nf):
                        if ignore(p):
                            ignored = p
                            ignoredata = repo.dirstate._ignorefileandline(p)
                            break
            if ignored:
                if ignored == nf:
                    ui.write(_("%s is ignored\n") % m.uipath(f))
                else:
                    ui.write(_("%s is ignored because of "
                               "containing folder %s\n")
                             % (m.uipath(f), ignored))
                ignorefile, lineno, line = ignoredata
                ui.write(_("(ignore rule in %s, line %d: '%s')\n")
                         % (ignorefile, lineno, line))
            else:
                ui.write(_("%s is not ignored\n") % m.uipath(f))

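# Hypothetical session, assuming .hgignore contains a "build" rule on its
# second line:
#
#   $ hg debugignore build/output.o
#   build/output.o is ignored
#   (ignore rule in .hgignore, line 2: 'build')
#   $ hg debugignore README
#   README is not ignored
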
@command('debugindex', cmdutil.debugrevlogopts +
    [('f', 'format', 0, _('revlog format'), _('FORMAT'))],
    _('[-f FORMAT] -c|-m|FILE'),
    optionalrepo=True)
def debugindex(ui, repo, file_=None, **opts):
    """dump the contents of an index file"""
    opts = pycompat.byteskwargs(opts)
    r = cmdutil.openrevlog(repo, 'debugindex', file_, opts)
    format = opts.get('format', 0)
    if format not in (0, 1):
        raise error.Abort(_("unknown format %d") % format)

    generaldelta = r.version & revlog.FLAG_GENERALDELTA
    if generaldelta:
        basehdr = ' delta'
    else:
        basehdr = '  base'

    if ui.debugflag:
        shortfn = hex
    else:
        shortfn = short

    # There might not be anything in r, so have a sane default
    idlen = 12
    for i in r:
        idlen = len(shortfn(r.node(i)))
        break

    if format == 0:
        ui.write(("   rev    offset  length " + basehdr + " linkrev"
                  " %s %s p2\n") % ("nodeid".ljust(idlen), "p1".ljust(idlen)))
    elif format == 1:
        ui.write(("   rev flag   offset   length"
                  "     size " + basehdr + "   link     p1     p2"
                  " %s\n") % "nodeid".rjust(idlen))

    for i in r:
        node = r.node(i)
        if generaldelta:
            base = r.deltaparent(i)
        else:
            base = r.chainbase(i)
        if format == 0:
            try:
                pp = r.parents(node)
            except Exception:
                pp = [nullid, nullid]
            ui.write("% 6d % 9d % 7d % 6d % 7d %s %s %s\n" % (
                    i, r.start(i), r.length(i), base, r.linkrev(i),
                    shortfn(node), shortfn(pp[0]), shortfn(pp[1])))
        elif format == 1:
            pr = r.parentrevs(i)
            ui.write("% 6d %04x % 8d % 8d % 8d % 6d % 6d % 6d % 6d %s\n" % (
                    i, r.flags(i), r.start(i), r.length(i), r.rawsize(i),
                    base, r.linkrev(i), pr[0], pr[1], shortfn(node)))

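# Hypothetical first line of format 0 output for a manifest revlog
# (--debug would print full-length node ids instead of short ones):
#
#   $ hg debugindex -m
#      rev    offset  length  delta linkrev nodeid       p1           p2
#        0         0      44     -1       0 4a34f8a3049d 000000000000 000000000000
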
@command('debugindexdot', cmdutil.debugrevlogopts,
    _('-c|-m|FILE'), optionalrepo=True)
def debugindexdot(ui, repo, file_=None, **opts):
    """dump an index DAG as a graphviz dot file"""
    opts = pycompat.byteskwargs(opts)
    r = cmdutil.openrevlog(repo, 'debugindexdot', file_, opts)
    ui.write(("digraph G {\n"))
    for i in r:
        node = r.node(i)
        pp = r.parents(node)
        ui.write("\t%d -> %d\n" % (r.rev(pp[0]), i))
        if pp[1] != nullid:
            ui.write("\t%d -> %d\n" % (r.rev(pp[1]), i))
    ui.write("}\n")

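# The output is plain graphviz, so it can be piped straight into dot, e.g.:
#
#   $ hg debugindexdot -c | dot -Tpng -o dag.png
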
@command('debuginstall', [] + cmdutil.formatteropts, '', norepo=True)
def debuginstall(ui, **opts):
    '''test Mercurial installation

    Returns 0 on success.
    '''
    opts = pycompat.byteskwargs(opts)

    def writetemp(contents):
        (fd, name) = tempfile.mkstemp(prefix="hg-debuginstall-")
        f = os.fdopen(fd, pycompat.sysstr("wb"))
        f.write(contents)
        f.close()
        return name

    problems = 0

    fm = ui.formatter('debuginstall', opts)
    fm.startitem()

    # encoding
    fm.write('encoding', _("checking encoding (%s)...\n"), encoding.encoding)
    err = None
    try:
        codecs.lookup(pycompat.sysstr(encoding.encoding))
    except LookupError as inst:
        err = util.forcebytestr(inst)
        problems += 1
    fm.condwrite(err, 'encodingerror', _(" %s\n"
                 " (check that your locale is properly set)\n"), err)

    # Python
    fm.write('pythonexe', _("checking Python executable (%s)\n"),
             pycompat.sysexecutable)
    fm.write('pythonver', _("checking Python version (%s)\n"),
             ("%d.%d.%d" % sys.version_info[:3]))
    fm.write('pythonlib', _("checking Python lib (%s)...\n"),
             os.path.dirname(pycompat.fsencode(os.__file__)))

    security = set(sslutil.supportedprotocols)
    if sslutil.hassni:
        security.add('sni')

    fm.write('pythonsecurity', _("checking Python security support (%s)\n"),
             fm.formatlist(sorted(security), name='protocol',
                           fmt='%s', sep=','))

    # These are warnings, not errors. So don't increment problem count. This
    # may change in the future.
    if 'tls1.2' not in security:
        fm.plain(_('  TLS 1.2 not supported by Python install; '
                   'network connections lack modern security\n'))
    if 'sni' not in security:
        fm.plain(_('  SNI not supported by Python install; may have '
                   'connectivity issues with some servers\n'))

    # TODO print CA cert info

    # hg version
    hgver = util.version()
    fm.write('hgver', _("checking Mercurial version (%s)\n"),
             hgver.split('+')[0])
    fm.write('hgverextra', _("checking Mercurial custom build (%s)\n"),
             '+'.join(hgver.split('+')[1:]))

    # compiled modules
    fm.write('hgmodulepolicy', _("checking module policy (%s)\n"),
             policy.policy)
    fm.write('hgmodules', _("checking installed modules (%s)...\n"),
             os.path.dirname(pycompat.fsencode(__file__)))

    if policy.policy in ('c', 'allow'):
        err = None
        try:
            from .cext import (
                base85,
                bdiff,
                mpatch,
                osutil,
            )
            dir(bdiff), dir(mpatch), dir(base85), dir(osutil) # quiet pyflakes
        except Exception as inst:
            err = util.forcebytestr(inst)
            problems += 1
        fm.condwrite(err, 'extensionserror', " %s\n", err)

    compengines = util.compengines._engines.values()
    fm.write('compengines', _('checking registered compression engines (%s)\n'),
             fm.formatlist(sorted(e.name() for e in compengines),
                           name='compengine', fmt='%s', sep=', '))
    fm.write('compenginesavail', _('checking available compression engines '
                                   '(%s)\n'),
             fm.formatlist(sorted(e.name() for e in compengines
                                  if e.available()),
                           name='compengine', fmt='%s', sep=', '))
    wirecompengines = util.compengines.supportedwireengines(util.SERVERROLE)
    fm.write('compenginesserver', _('checking available compression engines '
                                    'for wire protocol (%s)\n'),
             fm.formatlist([e.name() for e in wirecompengines
                            if e.wireprotosupport()],
                           name='compengine', fmt='%s', sep=', '))

    # templates
    p = templater.templatepaths()
    fm.write('templatedirs', 'checking templates (%s)...\n', ' '.join(p))
    fm.condwrite(not p, '', _(" no template directories found\n"))
    if p:
        m = templater.templatepath("map-cmdline.default")
        if m:
            # template found, check if it is working
            err = None
            try:
                templater.templater.frommapfile(m)
            except Exception as inst:
                err = util.forcebytestr(inst)
                p = None
            fm.condwrite(err, 'defaulttemplateerror', " %s\n", err)
        else:
            p = None
        fm.condwrite(p, 'defaulttemplate',
                     _("checking default template (%s)\n"), m)
        fm.condwrite(not m, 'defaulttemplatenotfound',
                     _(" template '%s' not found\n"), "default")
    if not p:
        problems += 1
    fm.condwrite(not p, '',
                 _(" (templates seem to have been installed incorrectly)\n"))

    # editor
    editor = ui.geteditor()
    editor = util.expandpath(editor)
    fm.write('editor', _("checking commit editor... (%s)\n"), editor)
    cmdpath = util.findexe(pycompat.shlexsplit(editor)[0])
    fm.condwrite(not cmdpath and editor == 'vi', 'vinotfound',
                 _(" No commit editor set and can't find %s in PATH\n"
                   " (specify a commit editor in your configuration"
                   " file)\n"), not cmdpath and editor == 'vi' and editor)
    fm.condwrite(not cmdpath and editor != 'vi', 'editornotfound',
                 _(" Can't find editor '%s' in PATH\n"
                   " (specify a commit editor in your configuration"
                   " file)\n"), not cmdpath and editor)
    if not cmdpath and editor != 'vi':
        problems += 1

    # check username
    username = None
    err = None
    try:
        username = ui.username()
    except error.Abort as e:
        err = util.forcebytestr(e)
        problems += 1

    fm.condwrite(username, 'username', _("checking username (%s)\n"), username)
    fm.condwrite(err, 'usernameerror', _("checking username...\n %s\n"
        " (specify a username in your configuration file)\n"), err)

    fm.condwrite(not problems, '',
                 _("no problems detected\n"))
    if not problems:
        fm.data(problems=problems)
    fm.condwrite(problems, 'problems',
                 _("%d problems detected,"
                   " please check your install!\n"), problems)
    fm.end()

    return problems

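# The tail of a healthy run looks like this (hypothetical username; the
# command's return value is the number of problems found):
#
#   $ hg debuginstall
#   ...
#   checking username (Alice <alice@example.com>)
#   no problems detected
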
@command('debugknown', [], _('REPO ID...'), norepo=True)
def debugknown(ui, repopath, *ids, **opts):
    """test whether node ids are known to a repo

    Every ID must be a full-length hex node id string. Returns a list of 0s
    and 1s indicating unknown/known.
    """
    opts = pycompat.byteskwargs(opts)
    repo = hg.peer(ui, opts, repopath)
    if not repo.capable('known'):
        raise error.Abort("known() not supported by target repository")
    flags = repo.known([bin(s) for s in ids])
    ui.write("%s\n" % ("".join([f and "1" or "0" for f in flags])))

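# Example probing two ids against a remote; the output is one digit per id,
# "1" for known and "0" for unknown (ids below are hypothetical):
#
#   $ hg debugknown http://example.com/repo \
#       043e04cd6a1f6d1d7f463d64f370e48c3b0ede4f \
#       00000000000000000000000000000000000000ff
#   10
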
@command('debuglabelcomplete', [], _('LABEL...'))
def debuglabelcomplete(ui, repo, *args):
    '''backwards compatibility with old bash completion scripts (DEPRECATED)'''
    debugnamecomplete(ui, repo, *args)

@command('debuglocks',
    [('L', 'force-lock', None, _('free the store lock (DANGEROUS)')),
     ('W', 'force-wlock', None,
      _('free the working state lock (DANGEROUS)'))],
    _('[OPTION]...'))
def debuglocks(ui, repo, **opts):
    """show or modify state of locks

    By default, this command will show which locks are held. This
    includes the user and process holding the lock, the amount of time
    the lock has been held, and the machine name where the process is
    running if it's not local.

    Locks protect the integrity of Mercurial's data, so should be
    treated with care. System crashes or other interruptions may cause
    locks to not be properly released, though Mercurial will usually
    detect and remove such stale locks automatically.

    However, detecting stale locks may not always be possible (for
    instance, on a shared filesystem). Removing locks may also be
    blocked by filesystem permissions.

    Returns 0 if no locks are held.

    """

    if opts.get(r'force_lock'):
        repo.svfs.unlink('lock')
    if opts.get(r'force_wlock'):
        repo.vfs.unlink('wlock')
    if opts.get(r'force_lock') or opts.get(r'force_wlock'):
        return 0

    now = time.time()
    held = 0

    def report(vfs, name, method):
        # this causes stale locks to get reaped for more accurate reporting
        try:
            l = method(False)
        except error.LockHeld:
            l = None

        if l:
            l.release()
        else:
            try:
                stat = vfs.lstat(name)
                age = now - stat.st_mtime
                user = util.username(stat.st_uid)
                locker = vfs.readlock(name)
                if ":" in locker:
                    host, pid = locker.split(':')
                    if host == socket.gethostname():
                        locker = 'user %s, process %s' % (user, pid)
                    else:
                        locker = 'user %s, process %s, host %s' \
                                 % (user, pid, host)
                ui.write(("%-6s %s (%ds)\n") % (name + ":", locker, age))
                return 1
            except OSError as e:
                if e.errno != errno.ENOENT:
                    raise

        ui.write(("%-6s free\n") % (name + ":"))
        return 0

    held += report(repo.svfs, "lock", repo.lock)
    held += report(repo.vfs, "wlock", repo.wlock)

    return held

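# Hypothetical output while another process holds the store lock; the
# trailing number is the lock's age in seconds:
#
#   $ hg debuglocks
#   lock:  user alice, process 4231, host worker1 (12s)
#   wlock: free
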
@command('debugmergestate', [], '')
def debugmergestate(ui, repo, *args):
    """print merge state

    Use --verbose to print out information about whether v1 or v2 merge state
    was chosen."""
    def _hashornull(h):
        if h == nullhex:
            return 'null'
        else:
            return h

    def printrecords(version):
        ui.write(('* version %s records\n') % version)
        if version == 1:
            records = v1records
        else:
            records = v2records

        for rtype, record in records:
            # pretty print some record types
            if rtype == 'L':
                ui.write(('local: %s\n') % record)
            elif rtype == 'O':
                ui.write(('other: %s\n') % record)
            elif rtype == 'm':
                driver, mdstate = record.split('\0', 1)
                ui.write(('merge driver: %s (state "%s")\n')
                         % (driver, mdstate))
            elif rtype in 'FDC':
                r = record.split('\0')
                f, state, hash, lfile, afile, anode, ofile = r[0:7]
                if version == 1:
                    onode = 'not stored in v1 format'
                    flags = r[7]
                else:
                    onode, flags = r[7:9]
                ui.write(('file: %s (record type "%s", state "%s", hash %s)\n')
                         % (f, rtype, state, _hashornull(hash)))
                ui.write(('  local path: %s (flags "%s")\n') % (lfile, flags))
                ui.write(('  ancestor path: %s (node %s)\n')
                         % (afile, _hashornull(anode)))
                ui.write(('  other path: %s (node %s)\n')
                         % (ofile, _hashornull(onode)))
            elif rtype == 'f':
                filename, rawextras = record.split('\0', 1)
                extras = rawextras.split('\0')
                i = 0
                extrastrings = []
                while i < len(extras):
                    extrastrings.append('%s = %s' % (extras[i], extras[i + 1]))
                    i += 2

                ui.write(('file extras: %s (%s)\n')
                         % (filename, ', '.join(extrastrings)))
            elif rtype == 'l':
                labels = record.split('\0', 2)
                labels = [l for l in labels if len(l) > 0]
                ui.write(('labels:\n'))
                ui.write(('  local: %s\n' % labels[0]))
                ui.write(('  other: %s\n' % labels[1]))
                if len(labels) > 2:
                    ui.write(('  base: %s\n' % labels[2]))
            else:
                ui.write(('unrecognized entry: %s\t%s\n')
                         % (rtype, record.replace('\0', '\t')))

    # Avoid mergestate.read() since it may raise an exception for unsupported
    # merge state records. We shouldn't be doing this, but this is OK since this
    # command is pretty low-level.
    ms = mergemod.mergestate(repo)

    # sort so that reasonable information is on top
    v1records = ms._readrecordsv1()
    v2records = ms._readrecordsv2()
    order = 'LOml'
    def key(r):
        idx = order.find(r[0])
        if idx == -1:
            return (1, r[1])
        else:
            return (0, idx)
    v1records.sort(key=key)
    v2records.sort(key=key)

    if not v1records and not v2records:
        ui.write(('no merge state found\n'))
    elif not v2records:
        ui.note(('no version 2 merge state\n'))
        printrecords(1)
    elif ms._v1v2match(v1records, v2records):
        ui.note(('v1 and v2 states match: using v2\n'))
        printrecords(2)
    else:
        ui.note(('v1 and v2 states mismatch: using v1\n'))
        printrecords(1)
        if ui.verbose:
            printrecords(2)

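# Abbreviated hypothetical output during a conflicted merge (hashes
# invented; state "u" means the file is still unresolved):
#
#   $ hg debugmergestate
#   * version 2 records
#   local: 57653b9f834a4493f7240b0681efcb9ae7cab745
#   other: dc77451844e37f03f5c559e3b8529b2b48d381d1
#   file: a (record type "F", state "u", hash 58e6b3a414a1e090dfc6029add0f3555ccba127f)
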
@command('debugnamecomplete', [], _('NAME...'))
def debugnamecomplete(ui, repo, *args):
    '''complete "names" - tags, open branch names, bookmark names'''

    names = set()
    # since we previously only listed open branches, we will handle that
    # specially (after this for loop)
    for name, ns in repo.names.iteritems():
        if name != 'branches':
            names.update(ns.listnames(repo))
    names.update(tag for (tag, heads, tip, closed)
                 in repo.branchmap().iterbranches() if not closed)
    completions = set()
    if not args:
        args = ['']
    for a in args:
        completions.update(n for n in names if n.startswith(a))
    ui.write('\n'.join(sorted(completions)))
    ui.write('\n')

1346 @command('debugobsolete',
1346 @command('debugobsolete',
1347 [('', 'flags', 0, _('markers flag')),
1347 [('', 'flags', 0, _('markers flag')),
1348 ('', 'record-parents', False,
1348 ('', 'record-parents', False,
1349 _('record parent information for the precursor')),
1349 _('record parent information for the precursor')),
1350 ('r', 'rev', [], _('display markers relevant to REV')),
1350 ('r', 'rev', [], _('display markers relevant to REV')),
1351 ('', 'exclusive', False, _('restrict display to markers only '
1351 ('', 'exclusive', False, _('restrict display to markers only '
1352 'relevant to REV')),
1352 'relevant to REV')),
1353 ('', 'index', False, _('display index of the marker')),
1353 ('', 'index', False, _('display index of the marker')),
1354 ('', 'delete', [], _('delete markers specified by indices')),
1354 ('', 'delete', [], _('delete markers specified by indices')),
1355 ] + cmdutil.commitopts2 + cmdutil.formatteropts,
1355 ] + cmdutil.commitopts2 + cmdutil.formatteropts,
1356 _('[OBSOLETED [REPLACEMENT ...]]'))
1356 _('[OBSOLETED [REPLACEMENT ...]]'))
1357 def debugobsolete(ui, repo, precursor=None, *successors, **opts):
1357 def debugobsolete(ui, repo, precursor=None, *successors, **opts):
1358 """create arbitrary obsolete marker
1358 """create arbitrary obsolete marker
1359
1359
1360 With no arguments, displays the list of obsolescence markers."""
1360 With no arguments, displays the list of obsolescence markers."""
1361
1361
1362 opts = pycompat.byteskwargs(opts)
1362 opts = pycompat.byteskwargs(opts)
1363
1363
1364 def parsenodeid(s):
1364 def parsenodeid(s):
1365 try:
1365 try:
1366 # We do not use revsingle/revrange functions here to accept
1366 # We do not use revsingle/revrange functions here to accept
1367 # arbitrary node identifiers, possibly not present in the
1367 # arbitrary node identifiers, possibly not present in the
1368 # local repository.
1368 # local repository.
1369 n = bin(s)
1369 n = bin(s)
1370 if len(n) != len(nullid):
1370 if len(n) != len(nullid):
1371 raise TypeError()
1371 raise TypeError()
1372 return n
1372 return n
1373 except TypeError:
1373 except TypeError:
1374 raise error.Abort('changeset references must be full hexadecimal '
1374 raise error.Abort('changeset references must be full hexadecimal '
1375 'node identifiers')
1375 'node identifiers')
1376
1376
1377 if opts.get('delete'):
1377 if opts.get('delete'):
1378 indices = []
1378 indices = []
1379 for v in opts.get('delete'):
1379 for v in opts.get('delete'):
1380 try:
1380 try:
1381 indices.append(int(v))
1381 indices.append(int(v))
1382 except ValueError:
1382 except ValueError:
1383 raise error.Abort(_('invalid index value: %r') % v,
1383 raise error.Abort(_('invalid index value: %r') % v,
1384 hint=_('use integers for indices'))
1384 hint=_('use integers for indices'))
1385
1385
1386 if repo.currenttransaction():
1386 if repo.currenttransaction():
1387 raise error.Abort(_('cannot delete obsmarkers in the middle '
1387 raise error.Abort(_('cannot delete obsmarkers in the middle '
1388 'of transaction.'))
1388 'of transaction.'))
1389
1389
1390 with repo.lock():
1390 with repo.lock():
1391 n = repair.deleteobsmarkers(repo.obsstore, indices)
1391 n = repair.deleteobsmarkers(repo.obsstore, indices)
1392 ui.write(_('deleted %i obsolescence markers\n') % n)
1392 ui.write(_('deleted %i obsolescence markers\n') % n)
1393
1393
1394 return
1394 return
1395
1395
    if precursor is not None:
        if opts['rev']:
            raise error.Abort('cannot select revision when creating marker')
        metadata = {}
        metadata['user'] = opts['user'] or ui.username()
        succs = tuple(parsenodeid(succ) for succ in successors)
        l = repo.lock()
        try:
            tr = repo.transaction('debugobsolete')
            try:
                date = opts.get('date')
                if date:
                    date = util.parsedate(date)
                else:
                    date = None
                prec = parsenodeid(precursor)
                parents = None
                if opts['record_parents']:
                    if prec not in repo.unfiltered():
                        raise error.Abort('cannot use --record-parents on '
                                          'unknown changesets')
                    parents = repo.unfiltered()[prec].parents()
                    parents = tuple(p.node() for p in parents)
                repo.obsstore.create(tr, prec, succs, opts['flags'],
                                     parents=parents, date=date,
                                     metadata=metadata, ui=ui)
                tr.close()
            except ValueError as exc:
                raise error.Abort(_('bad obsmarker input: %s') % exc)
            finally:
                tr.release()
        finally:
            l.release()
    else:
        if opts['rev']:
            revs = scmutil.revrange(repo, opts['rev'])
            nodes = [repo[r].node() for r in revs]
            markers = list(obsutil.getmarkers(repo, nodes=nodes,
                                              exclusive=opts['exclusive']))
            markers.sort(key=lambda x: x._data)
        else:
            markers = obsutil.getmarkers(repo)

        markerstoiter = markers
        isrelevant = lambda m: True
        if opts.get('rev') and opts.get('index'):
            markerstoiter = obsutil.getmarkers(repo)
            markerset = set(markers)
            isrelevant = lambda m: m in markerset

        fm = ui.formatter('debugobsolete', opts)
        for i, m in enumerate(markerstoiter):
            if not isrelevant(m):
                # a marker can be irrelevant when we're iterating over a set
                # of markers (markerstoiter) which is bigger than the set
                # of markers we want to display (markers).
                # This can happen if both the --index and --rev options are
                # provided: we then need to iterate over all of the markers
                # to get the correct indices, but only display the ones that
                # are relevant to the --rev value
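                # Illustration (added, with hypothetical numbers): with
                # markers [m0, m1, m2] in the store and --rev selecting
                # only m2, we still enumerate m0 and m1 so that m2 is
                # reported with its global index 2 rather than 0.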
                continue
            fm.startitem()
            ind = i if opts.get('index') else None
            cmdutil.showmarker(fm, m, index=ind)
        fm.end()

@command('debugpathcomplete',
         [('f', 'full', None, _('complete an entire path')),
          ('n', 'normal', None, _('show only normal files')),
          ('a', 'added', None, _('show only added files')),
          ('r', 'removed', None, _('show only removed files'))],
         _('FILESPEC...'))
def debugpathcomplete(ui, repo, *specs, **opts):
    '''complete part or all of a tracked path

    This command supports shells that offer path name completion. It
    currently completes only files already known to the dirstate.

    Completion extends only to the next path segment unless
    --full is specified, in which case entire paths are used.'''
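    # Example (hypothetical session, not from the original source): with
    # tracked files 'foo/bar' and 'foo/baz',
    #   $ hg debugpathcomplete fo          # prints 'foo' (next segment only)
    #   $ hg debugpathcomplete --full fo   # prints 'foo/bar' and 'foo/baz'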

    def complete(path, acceptable):
        dirstate = repo.dirstate
        spec = os.path.normpath(os.path.join(pycompat.getcwd(), path))
        rootdir = repo.root + pycompat.ossep
        if spec != repo.root and not spec.startswith(rootdir):
            return [], []
        if os.path.isdir(spec):
            spec += '/'
        spec = spec[len(rootdir):]
        fixpaths = pycompat.ossep != '/'
        if fixpaths:
            spec = spec.replace(pycompat.ossep, '/')
        speclen = len(spec)
        fullpaths = opts[r'full']
        files, dirs = set(), set()
        adddir, addfile = dirs.add, files.add
        for f, st in dirstate.iteritems():
            if f.startswith(spec) and st[0] in acceptable:
                if fixpaths:
                    f = f.replace('/', pycompat.ossep)
                if fullpaths:
                    addfile(f)
                    continue
                s = f.find(pycompat.ossep, speclen)
                if s >= 0:
                    adddir(f[:s])
                else:
                    addfile(f)
        return files, dirs

    acceptable = ''
    if opts[r'normal']:
        acceptable += 'nm'
    if opts[r'added']:
        acceptable += 'a'
    if opts[r'removed']:
        acceptable += 'r'
    cwd = repo.getcwd()
    if not specs:
        specs = ['.']

    files, dirs = set(), set()
    for spec in specs:
        f, d = complete(spec, acceptable or 'nmar')
        files.update(f)
        dirs.update(d)
    files.update(dirs)
    ui.write('\n'.join(repo.pathto(p, cwd) for p in sorted(files)))
    ui.write('\n')

@command('debugpickmergetool',
         [('r', 'rev', '', _('check for files in this revision'), _('REV')),
          ('', 'changedelete', None, _('emulate merging change and delete')),
         ] + cmdutil.walkopts + cmdutil.mergetoolopts,
         _('[PATTERN]...'),
         inferrepo=True)
def debugpickmergetool(ui, repo, *pats, **opts):
    """examine which merge tool is chosen for the specified file

    As described in :hg:`help merge-tools`, Mercurial examines the
    configurations below in this order to decide which merge tool is
    chosen for the specified file.

    1. ``--tool`` option
    2. ``HGMERGE`` environment variable
    3. configurations in ``merge-patterns`` section
    4. configuration of ``ui.merge``
    5. configurations in ``merge-tools`` section
    6. ``hgmerge`` tool (for historical reasons only)
    7. default tool for fallback (``:merge`` or ``:prompt``)

    This command writes out the examination result in the style below::

        FILE = MERGETOOL

    By default, all files known in the first parent context of the
    working directory are examined. Use file patterns and/or -I/-X
    options to limit target files. -r/--rev is also useful to examine
    files in another context without actually updating to it.

    With --debug, this command also shows warning messages while matching
    against ``merge-patterns`` and so on. It is recommended to
    use this option with explicit file patterns and/or -I/-X options,
    because this option increases the amount of output per file according
    to configurations in hgrc.

    With -v/--verbose, this command shows the configurations below
    first (only if specified).

    - ``--tool`` option
    - ``HGMERGE`` environment variable
    - configuration of ``ui.merge``

    If a merge tool is chosen before matching against
    ``merge-patterns``, this command can't show any helpful
    information, even with --debug. In such a case, the information
    above is useful for knowing why a merge tool was chosen.
    """
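    # Example (hypothetical session, not from the original source): forcing
    # a tool via step 1 of the order above:
    #   $ hg debugpickmergetool --tool :merge file.txt
    #   file.txt = :merge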
    opts = pycompat.byteskwargs(opts)
    overrides = {}
    if opts['tool']:
        overrides[('ui', 'forcemerge')] = opts['tool']
        ui.note(('with --tool %r\n') % (opts['tool']))

    with ui.configoverride(overrides, 'debugmergepatterns'):
        hgmerge = encoding.environ.get("HGMERGE")
        if hgmerge is not None:
            ui.note(('with HGMERGE=%r\n') % (hgmerge))
        uimerge = ui.config("ui", "merge")
        if uimerge:
            ui.note(('with ui.merge=%r\n') % (uimerge))

        ctx = scmutil.revsingle(repo, opts.get('rev'))
        m = scmutil.match(ctx, pats, opts)
        changedelete = opts['changedelete']
        for path in ctx.walk(m):
            fctx = ctx[path]
            try:
                if not ui.debugflag:
                    ui.pushbuffer(error=True)
                tool, toolpath = filemerge._picktool(repo, ui, path,
                                                     fctx.isbinary(),
                                                     'l' in fctx.flags(),
                                                     changedelete)
            finally:
                if not ui.debugflag:
                    ui.popbuffer()
            ui.write(('%s = %s\n') % (path, tool))

@command('debugpushkey', [], _('REPO NAMESPACE [KEY OLD NEW]'), norepo=True)
def debugpushkey(ui, repopath, namespace, *keyinfo, **opts):
    '''access the pushkey key/value protocol

    With two args, list the keys in the given namespace.

    With five args, set a key to new if it currently is set to old.
    Reports success or failure.
    '''
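    # Example (hypothetical session, not from the original source):
    #   $ hg debugpushkey https://example.com/repo namespaces
    #   $ hg debugpushkey https://example.com/repo bookmarks mybook OLD NEW
    # where OLD and NEW are full hexadecimal node identifiers.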

    target = hg.peer(ui, {}, repopath)
    if keyinfo:
        key, old, new = keyinfo
        r = target.pushkey(namespace, key, old, new)
        ui.status(str(r) + '\n')
        return not r
    else:
        for k, v in sorted(target.listkeys(namespace).iteritems()):
            ui.write("%s\t%s\n" % (util.escapestr(k),
                                   util.escapestr(v)))

@command('debugpvec', [], _('A B'))
def debugpvec(ui, repo, a, b=None):
    ca = scmutil.revsingle(repo, a)
    cb = scmutil.revsingle(repo, b)
    pa = pvec.ctxpvec(ca)
    pb = pvec.ctxpvec(cb)
    if pa == pb:
        rel = "="
    elif pa > pb:
        rel = ">"
    elif pa < pb:
        rel = "<"
    elif pa | pb:
        rel = "|"
    ui.write(_("a: %s\n") % pa)
    ui.write(_("b: %s\n") % pb)
    ui.write(_("depth(a): %d depth(b): %d\n") % (pa._depth, pb._depth))
    ui.write(_("delta: %d hdist: %d distance: %d relation: %s\n") %
             (abs(pa._depth - pb._depth), pvec._hamming(pa._vec, pb._vec),
              pa.distance(pb), rel))

@command('debugrebuilddirstate|debugrebuildstate',
         [('r', 'rev', '', _('revision to rebuild to'), _('REV')),
          ('', 'minimal', None, _('only rebuild files that are inconsistent '
                                  'with the working copy parent')),
         ],
         _('[-r REV]'))
def debugrebuilddirstate(ui, repo, rev, **opts):
    """rebuild the dirstate as it would look for the given revision

    If no revision is specified the first current parent will be used.

    The dirstate will be set to the files of the given revision.
    The actual working directory content or existing dirstate
    information such as adds or removes is not considered.

    ``minimal`` will only rebuild the dirstate status for files that claim to
    be tracked but are not in the parent manifest, or that exist in the parent
    manifest but are not in the dirstate. It will not change adds, removes, or
    modified files that are in the working copy parent.

    One use of this command is to make the next :hg:`status` invocation
    check the actual file content.
    """
    ctx = scmutil.revsingle(repo, rev)
    with repo.wlock():
        dirstate = repo.dirstate
        changedfiles = None
        # See command doc for what minimal does.
        if opts.get(r'minimal'):
            manifestfiles = set(ctx.manifest().keys())
            dirstatefiles = set(dirstate)
            manifestonly = manifestfiles - dirstatefiles
            dsonly = dirstatefiles - manifestfiles
            dsnotadded = set(f for f in dsonly if dirstate[f] != 'a')
            changedfiles = manifestonly | dsnotadded
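            # Worked illustration (added, hypothetical file names): if the
            # manifest holds {a, b} and the dirstate tracks {b, c, d} with
            # 'd' marked as added, then manifestonly == {a}, dsonly ==
            # {c, d}, dsnotadded == {c}, and changedfiles == {a, c}; the
            # pending add of 'd' is preserved.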

        dirstate.rebuild(ctx.node(), ctx.manifest(), changedfiles)

@command('debugrebuildfncache', [], '')
def debugrebuildfncache(ui, repo):
    """rebuild the fncache file"""
    repair.rebuildfncache(ui, repo)

@command('debugrename',
         [('r', 'rev', '', _('revision to debug'), _('REV'))],
         _('[-r REV] FILE'))
def debugrename(ui, repo, file1, *pats, **opts):
    """dump rename information"""

    opts = pycompat.byteskwargs(opts)
    ctx = scmutil.revsingle(repo, opts.get('rev'))
    m = scmutil.match(ctx, (file1,) + pats, opts)
    for abs in ctx.walk(m):
        fctx = ctx[abs]
        o = fctx.filelog().renamed(fctx.filenode())
        rel = m.rel(abs)
        if o:
            ui.write(_("%s renamed from %s:%s\n") % (rel, o[0], hex(o[1])))
        else:
            ui.write(_("%s not renamed\n") % rel)

@command('debugrevlog', cmdutil.debugrevlogopts +
    [('d', 'dump', False, _('dump index data'))],
    _('-c|-m|FILE'),
    optionalrepo=True)
def debugrevlog(ui, repo, file_=None, **opts):
    """show data and statistics about a revlog"""
    opts = pycompat.byteskwargs(opts)
    r = cmdutil.openrevlog(repo, 'debugrevlog', file_, opts)

    if opts.get("dump"):
        numrevs = len(r)
        ui.write(("# rev p1rev p2rev start end deltastart base p1 p2"
                  " rawsize totalsize compression heads chainlen\n"))
        ts = 0
        heads = set()

        for rev in xrange(numrevs):
            dbase = r.deltaparent(rev)
            if dbase == -1:
                dbase = rev
            cbase = r.chainbase(rev)
            clen = r.chainlen(rev)
            p1, p2 = r.parentrevs(rev)
            rs = r.rawsize(rev)
            ts = ts + rs
            heads -= set(r.parentrevs(rev))
            heads.add(rev)
            try:
                compression = ts / r.end(rev)
            except ZeroDivisionError:
                compression = 0
            ui.write("%5d %5d %5d %5d %5d %10d %4d %4d %4d %7d %9d "
                     "%11d %5d %8d\n" %
                     (rev, p1, p2, r.start(rev), r.end(rev),
                      r.start(dbase), r.start(cbase),
                      r.start(p1), r.start(p2),
                      rs, ts, compression, len(heads), clen))
        return 0

    v = r.version
    format = v & 0xFFFF
    flags = []
    gdelta = False
    if v & revlog.FLAG_INLINE_DATA:
        flags.append('inline')
    if v & revlog.FLAG_GENERALDELTA:
        gdelta = True
        flags.append('generaldelta')
    if not flags:
        flags = ['(none)']

    nummerges = 0
    numfull = 0
    numprev = 0
    nump1 = 0
    nump2 = 0
    numother = 0
    nump1prev = 0
    nump2prev = 0
    chainlengths = []
    chainbases = []
    chainspans = []

    datasize = [None, 0, 0]
    fullsize = [None, 0, 0]
    deltasize = [None, 0, 0]
    chunktypecounts = {}
    chunktypesizes = {}

    def addsize(size, l):
        if l[0] is None or size < l[0]:
            l[0] = size
        if size > l[1]:
            l[1] = size
        l[2] += size
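    # Added note: addsize() maintains a running [min, max, total] triple.
    # For instance, starting from l = [None, 0, 0], calling addsize(5, l)
    # and then addsize(2, l) leaves l == [2, 5, 7].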

    numrevs = len(r)
    for rev in xrange(numrevs):
        p1, p2 = r.parentrevs(rev)
        delta = r.deltaparent(rev)
        if format > 0:
            addsize(r.rawsize(rev), datasize)
        if p2 != nullrev:
            nummerges += 1
        size = r.length(rev)
        if delta == nullrev:
            chainlengths.append(0)
            chainbases.append(r.start(rev))
            chainspans.append(size)
            numfull += 1
            addsize(size, fullsize)
        else:
            chainlengths.append(chainlengths[delta] + 1)
            baseaddr = chainbases[delta]
            revaddr = r.start(rev)
            chainbases.append(baseaddr)
            chainspans.append((revaddr - baseaddr) + size)
            addsize(size, deltasize)
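            # Added illustration (hypothetical chain): if rev 0 is a full
            # snapshot and revs 1 and 2 each delta against the previous
            # revision, chainlengths grows as [0, 1, 2] while chainspans
            # records the byte distance from each rev back to its chain
            # base.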
            if delta == rev - 1:
                numprev += 1
                if delta == p1:
                    nump1prev += 1
                elif delta == p2:
                    nump2prev += 1
            elif delta == p1:
                nump1 += 1
            elif delta == p2:
                nump2 += 1
            elif delta != nullrev:
                numother += 1

        # Obtain data on the raw chunks in the revlog.
        segment = r._getsegmentforrevs(rev, rev)[1]
        if segment:
            chunktype = bytes(segment[0:1])
        else:
            chunktype = 'empty'

        if chunktype not in chunktypecounts:
            chunktypecounts[chunktype] = 0
            chunktypesizes[chunktype] = 0

        chunktypecounts[chunktype] += 1
        chunktypesizes[chunktype] += size

    # Adjust size min value for empty cases
    for size in (datasize, fullsize, deltasize):
        if size[0] is None:
            size[0] = 0

    numdeltas = numrevs - numfull
    numoprev = numprev - nump1prev - nump2prev
    totalrawsize = datasize[2]
    datasize[2] /= numrevs
    fulltotal = fullsize[2]
    fullsize[2] /= numfull
    deltatotal = deltasize[2]
    if numrevs - numfull > 0:
        deltasize[2] /= numrevs - numfull
    totalsize = fulltotal + deltatotal
    avgchainlen = sum(chainlengths) / numrevs
    maxchainlen = max(chainlengths)
    maxchainspan = max(chainspans)
    compratio = 1
    if totalsize:
        compratio = totalrawsize / totalsize

    basedfmtstr = '%%%dd\n'
    basepcfmtstr = '%%%dd %s(%%5.2f%%%%)\n'

    def dfmtstr(max):
        return basedfmtstr % len(str(max))
    def pcfmtstr(max, padding=0):
        return basepcfmtstr % (len(str(max)), ' ' * padding)

    def pcfmt(value, total):
        if total:
            return (value, 100 * float(value) / total)
        else:
            return value, 100.0

    ui.write(('format : %d\n') % format)
    ui.write(('flags : %s\n') % ', '.join(flags))

    ui.write('\n')
    fmt = pcfmtstr(totalsize)
    fmt2 = dfmtstr(totalsize)
    ui.write(('revisions : ') + fmt2 % numrevs)
    ui.write((' merges : ') + fmt % pcfmt(nummerges, numrevs))
    ui.write((' normal : ') + fmt % pcfmt(numrevs - nummerges, numrevs))
    ui.write(('revisions : ') + fmt2 % numrevs)
    ui.write((' full : ') + fmt % pcfmt(numfull, numrevs))
    ui.write((' deltas : ') + fmt % pcfmt(numdeltas, numrevs))
    ui.write(('revision size : ') + fmt2 % totalsize)
    ui.write((' full : ') + fmt % pcfmt(fulltotal, totalsize))
    ui.write((' deltas : ') + fmt % pcfmt(deltatotal, totalsize))

    def fmtchunktype(chunktype):
        if chunktype == 'empty':
            return ' %s : ' % chunktype
        elif chunktype in pycompat.bytestr(string.ascii_letters):
            return ' 0x%s (%s) : ' % (hex(chunktype), chunktype)
        else:
            return ' 0x%s : ' % hex(chunktype)

    ui.write('\n')
    ui.write(('chunks : ') + fmt2 % numrevs)
    for chunktype in sorted(chunktypecounts):
        ui.write(fmtchunktype(chunktype))
        ui.write(fmt % pcfmt(chunktypecounts[chunktype], numrevs))
    ui.write(('chunks size : ') + fmt2 % totalsize)
    for chunktype in sorted(chunktypecounts):
        ui.write(fmtchunktype(chunktype))
        ui.write(fmt % pcfmt(chunktypesizes[chunktype], totalsize))

    ui.write('\n')
    fmt = dfmtstr(max(avgchainlen, maxchainlen, maxchainspan, compratio))
    ui.write(('avg chain length : ') + fmt % avgchainlen)
    ui.write(('max chain length : ') + fmt % maxchainlen)
    ui.write(('max chain reach : ') + fmt % maxchainspan)
    ui.write(('compression ratio : ') + fmt % compratio)

    if format > 0:
        ui.write('\n')
        ui.write(('uncompressed data size (min/max/avg) : %d / %d / %d\n')
                 % tuple(datasize))
        ui.write(('full revision size (min/max/avg) : %d / %d / %d\n')
                 % tuple(fullsize))
        ui.write(('delta size (min/max/avg) : %d / %d / %d\n')
                 % tuple(deltasize))

    if numdeltas > 0:
        ui.write('\n')
        fmt = pcfmtstr(numdeltas)
        fmt2 = pcfmtstr(numdeltas, 4)
        ui.write(('deltas against prev : ') + fmt % pcfmt(numprev, numdeltas))
        if numprev > 0:
            ui.write((' where prev = p1 : ') + fmt2 % pcfmt(nump1prev,
                                                            numprev))
            ui.write((' where prev = p2 : ') + fmt2 % pcfmt(nump2prev,
                                                            numprev))
            ui.write((' other : ') + fmt2 % pcfmt(numoprev,
                                                  numprev))
        if gdelta:
            ui.write(('deltas against p1 : ')
                     + fmt % pcfmt(nump1, numdeltas))
            ui.write(('deltas against p2 : ')
                     + fmt % pcfmt(nump2, numdeltas))
            ui.write(('deltas against other : ') + fmt % pcfmt(numother,
                                                               numdeltas))

@command('debugrevspec',
    [('', 'optimize', None,
      _('print parsed tree after optimizing (DEPRECATED)')),
     ('', 'show-revs', True, _('print list of result revisions (default)')),
     ('s', 'show-set', None, _('print internal representation of result set')),
     ('p', 'show-stage', [],
      _('print parsed tree at the given stage'), _('NAME')),
     ('', 'no-optimized', False, _('evaluate tree without optimization')),
     ('', 'verify-optimized', False, _('verify optimized result')),
     ],
    ('REVSPEC'))
def debugrevspec(ui, repo, expr, **opts):
    """parse and apply a revision specification

    Use -p/--show-stage option to print the parsed tree at the given stages.
    Use -p all to print the tree at every stage.

    Use --no-show-revs option with -s or -p to print only the set
    representation or the parsed tree respectively.

    Use --verify-optimized to compare the optimized result with the unoptimized
    one. Returns 1 if the optimized result differs.
    """
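    # Example (hypothetical session, not from the original source):
    #   $ hg debugrevspec -p all '0::tip'
    # prints the tree once per stage (parsed, expanded, concatenated,
    # analyzed, optimized) followed by the matching revision numbers.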
    opts = pycompat.byteskwargs(opts)
    aliases = ui.configitems('revsetalias')
    stages = [
        ('parsed', lambda tree: tree),
        ('expanded', lambda tree: revsetlang.expandaliases(tree, aliases,
                                                           ui.warn)),
        ('concatenated', revsetlang.foldconcat),
        ('analyzed', revsetlang.analyze),
        ('optimized', revsetlang.optimize),
    ]
    if opts['no_optimized']:
        stages = stages[:-1]
    if opts['verify_optimized'] and opts['no_optimized']:
        raise error.Abort(_('cannot use --verify-optimized with '
                            '--no-optimized'))
    stagenames = set(n for n, f in stages)

    showalways = set()
    showchanged = set()
    if ui.verbose and not opts['show_stage']:
        # show parsed tree by --verbose (deprecated)
        showalways.add('parsed')
        showchanged.update(['expanded', 'concatenated'])
    if opts['optimize']:
        showalways.add('optimized')
    if opts['show_stage'] and opts['optimize']:
        raise error.Abort(_('cannot use --optimize with --show-stage'))
    if opts['show_stage'] == ['all']:
        showalways.update(stagenames)
    else:
        for n in opts['show_stage']:
            if n not in stagenames:
                raise error.Abort(_('invalid stage name: %s') % n)
        showalways.update(opts['show_stage'])

    treebystage = {}
    printedtree = None
    tree = revsetlang.parse(expr, lookup=repo.__contains__)
    for n, f in stages:
        treebystage[n] = tree = f(tree)
        if n in showalways or (n in showchanged and tree != printedtree):
            if opts['show_stage'] or n != 'parsed':
                ui.write(("* %s:\n") % n)
            ui.write(revsetlang.prettyformat(tree), "\n")
            printedtree = tree

    if opts['verify_optimized']:
        arevs = revset.makematcher(treebystage['analyzed'])(repo)
        brevs = revset.makematcher(treebystage['optimized'])(repo)
        if opts['show_set'] or (opts['show_set'] is None and ui.verbose):
            ui.write(("* analyzed set:\n"), smartset.prettyformat(arevs), "\n")
            ui.write(("* optimized set:\n"), smartset.prettyformat(brevs), "\n")
        arevs = list(arevs)
        brevs = list(brevs)
        if arevs == brevs:
            return 0
        ui.write(('--- analyzed\n'), label='diff.file_a')
        ui.write(('+++ optimized\n'), label='diff.file_b')
        sm = difflib.SequenceMatcher(None, arevs, brevs)
        for tag, alo, ahi, blo, bhi in sm.get_opcodes():
            if tag in ('delete', 'replace'):
                for c in arevs[alo:ahi]:
                    ui.write('-%s\n' % c, label='diff.deleted')
            if tag in ('insert', 'replace'):
                for c in brevs[blo:bhi]:
                    ui.write('+%s\n' % c, label='diff.inserted')
            if tag == 'equal':
                for c in arevs[alo:ahi]:
                    ui.write(' %s\n' % c)
        return 1

    func = revset.makematcher(tree)
    revs = func(repo)
    if opts['show_set'] or (opts['show_set'] is None and ui.verbose):
        ui.write(("* set:\n"), smartset.prettyformat(revs), "\n")
    if not opts['show_revs']:
        return
    for c in revs:
        ui.write("%s\n" % c)

@command('debugsetparents', [], _('REV1 [REV2]'))
def debugsetparents(ui, repo, rev1, rev2=None):
    """manually set the parents of the current working directory

    This is useful for writing repository conversion tools, but should
    be used with care. For example, neither the working directory nor the
    dirstate is updated, so file status may be incorrect after running this
    command.

    Returns 0 on success.
    """
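    # Example (hypothetical session, not from the original source): make the
    # working directory a merge of revisions 2 and 3 without touching any
    # files:
    #   $ hg debugsetparents 2 3
    # Omitting REV2 resets the second parent to the null revision.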

    r1 = scmutil.revsingle(repo, rev1).node()
    r2 = scmutil.revsingle(repo, rev2, 'null').node()

    with repo.wlock():
        repo.setparents(r1, r2)

@command('debugssl', [], '[SOURCE]', optionalrepo=True)
def debugssl(ui, repo, source=None, **opts):
    '''test a secure connection to a server

    This builds the certificate chain for the server on Windows, installing the
    missing intermediates and trusted root via Windows Update if necessary. It
    does nothing on other platforms.

    If SOURCE is omitted, the 'default' path will be used. If a URL is given,
    that server is used. See :hg:`help urls` for more information.

    If the update succeeds, retry the original operation. Otherwise, the cause
    of the SSL error is likely another issue.
    '''
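    # Example (hypothetical session on a Windows client, not from the
    # original source):
    #   $ hg debugssl https://example.com/repo
    #   checking the certificate chain for example.com
    #   full certificate chain is available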
    if pycompat.osname != 'nt':
        raise error.Abort(_('certificate chain building is only possible on '
                            'Windows'))

    if not source:
        if not repo:
            raise error.Abort(_("there is no Mercurial repository here, and no "
                                "server specified"))
        source = "default"

    source, branches = hg.parseurl(ui.expandpath(source))
    url = util.url(source)
    addr = None

    if url.scheme == 'https':
        addr = (url.host, url.port or 443)
    elif url.scheme == 'ssh':
        addr = (url.host, url.port or 22)
    else:
        raise error.Abort(_("only https and ssh connections are supported"))

    from . import win32

    s = ssl.wrap_socket(socket.socket(), ssl_version=ssl.PROTOCOL_TLS,
                        cert_reqs=ssl.CERT_NONE, ca_certs=None)

    try:
        s.connect(addr)
        cert = s.getpeercert(True)

        ui.status(_('checking the certificate chain for %s\n') % url.host)

        complete = win32.checkcertificatechain(cert, build=False)

        if not complete:
            ui.status(_('certificate chain is incomplete, updating... '))

            if not win32.checkcertificatechain(cert):
                ui.status(_('failed.\n'))
            else:
                ui.status(_('done.\n'))
        else:
            ui.status(_('full certificate chain is available\n'))
    finally:
        s.close()

@command('debugsub',
    [('r', 'rev', '',
      _('revision to check'), _('REV'))],
    _('[-r REV] [REV]'))
def debugsub(ui, repo, rev=None):
    ctx = scmutil.revsingle(repo, rev, None)
    for k, v in sorted(ctx.substate.items()):
        ui.write(('path %s\n') % k)
        ui.write((' source %s\n') % v[0])
        ui.write((' revision %s\n') % v[1])

@command('debugsuccessorssets',
    [('', 'closest', False, _('return closest successors sets only'))],
    _('[REV]'))
def debugsuccessorssets(ui, repo, *revs, **opts):
    """show set of successors for revision

    A successors set of changeset A is a consistent group of revisions that
    succeed A. It contains non-obsolete changesets only unless the --closest
    option is set.

    In most cases a changeset A has a single successors set containing a single
    successor (changeset A replaced by A').

    A changeset that is made obsolete with no successors is called "pruned".
    Such changesets have no successors sets at all.

    A changeset that has been "split" will have a successors set containing
    more than one successor.

    A changeset that has been rewritten in multiple different ways is called
    "divergent". Such changesets have multiple successor sets (each of which
    may also be split, i.e. have multiple successors).

    Results are displayed as follows::

        <rev1>
            <successors-1A>
        <rev2>
            <successors-2A>
            <successors-2B1> <successors-2B2> <successors-2B3>

    Here rev2 has two possible (i.e. divergent) successors sets. The first
    holds one element, whereas the second holds three (i.e. the changeset has
    been split).
    """
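    # Example (hypothetical output, not from the original source): a
    # changeset that was split into two successors yields one successors
    # set holding two elements:
    #   $ hg debugsuccessorssets REV
    #   <rev>
    #       <successor-A> <successor-B>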
    # passed to successorssets caching computation from one call to another
    cache = {}
    ctx2str = str
    node2str = short
    if ui.debug():
        def ctx2str(ctx):
            return ctx.hex()
        node2str = hex
    for rev in scmutil.revrange(repo, revs):
        ctx = repo[rev]
        ui.write('%s\n' % ctx2str(ctx))
        for succsset in obsutil.successorssets(repo, ctx.node(),
                                               closest=opts['closest'],
                                               cache=cache):
            if succsset:
                ui.write('    ')
                ui.write(node2str(succsset[0]))
                for node in succsset[1:]:
                    ui.write(' ')
                    ui.write(node2str(node))
            ui.write('\n')

@command('debugtemplate',
    [('r', 'rev', [], _('apply template on changesets'), _('REV')),
     ('D', 'define', [], _('define template keyword'), _('KEY=VALUE'))],
    _('[-r REV]... [-D KEY=VALUE]... TEMPLATE'),
    optionalrepo=True)
def debugtemplate(ui, repo, tmpl, **opts):
    """parse and apply a template

    If -r/--rev is given, the template is processed as a log template and
    applied to the given changesets. Otherwise, it is processed as a generic
    template.

    Use --verbose to print the parsed tree.
    """
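    # Example (hypothetical session, not from the original source), using
    # the generic (non-log) mode with a keyword supplied via -D:
    #   $ hg debugtemplate -D word=hello '{word}\n'
    #   hello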
2203 revs = None
2203 revs = None
2204 if opts[r'rev']:
2204 if opts[r'rev']:
2205 if repo is None:
2205 if repo is None:
2206 raise error.RepoError(_('there is no Mercurial repository here '
2206 raise error.RepoError(_('there is no Mercurial repository here '
2207 '(.hg not found)'))
2207 '(.hg not found)'))
2208 revs = scmutil.revrange(repo, opts[r'rev'])
2208 revs = scmutil.revrange(repo, opts[r'rev'])
2209
2209
2210 props = {}
2210 props = {}
2211 for d in opts[r'define']:
2211 for d in opts[r'define']:
2212 try:
2212 try:
2213 k, v = (e.strip() for e in d.split('=', 1))
2213 k, v = (e.strip() for e in d.split('=', 1))
2214 if not k or k == 'ui':
2214 if not k or k == 'ui':
2215 raise ValueError
2215 raise ValueError
2216 props[k] = v
2216 props[k] = v
2217 except ValueError:
2217 except ValueError:
2218 raise error.Abort(_('malformed keyword definition: %s') % d)
2218 raise error.Abort(_('malformed keyword definition: %s') % d)
2219
2219
2220 if ui.verbose:
2220 if ui.verbose:
2221 aliases = ui.configitems('templatealias')
2221 aliases = ui.configitems('templatealias')
2222 tree = templater.parse(tmpl)
2222 tree = templater.parse(tmpl)
2223 ui.note(templater.prettyformat(tree), '\n')
2223 ui.note(templater.prettyformat(tree), '\n')
2224 newtree = templater.expandaliases(tree, aliases)
2224 newtree = templater.expandaliases(tree, aliases)
2225 if newtree != tree:
2225 if newtree != tree:
2226 ui.note(("* expanded:\n"), templater.prettyformat(newtree), '\n')
2226 ui.note(("* expanded:\n"), templater.prettyformat(newtree), '\n')
2227
2227
2228 if revs is None:
2228 if revs is None:
2229 t = formatter.maketemplater(ui, tmpl)
2229 t = formatter.maketemplater(ui, tmpl)
2230 props['ui'] = ui
2230 props['ui'] = ui
2231 ui.write(t.render(props))
2231 ui.write(t.render(props))
2232 else:
2232 else:
2233 displayer = cmdutil.makelogtemplater(ui, repo, tmpl)
2233 displayer = cmdutil.makelogtemplater(ui, repo, tmpl)
2234 for r in revs:
2234 for r in revs:
2235 displayer.show(repo[r], **pycompat.strkwargs(props))
2235 displayer.show(repo[r], **pycompat.strkwargs(props))
2236 displayer.close()
2236 displayer.close()
2237
2237
@command('debugupdatecaches', [])
def debugupdatecaches(ui, repo, *pats, **opts):
    """warm all known caches in the repository"""
    with repo.wlock(), repo.lock():
        repo.updatecaches()

@command('debugupgraderepo', [
    ('o', 'optimize', [], _('extra optimization to perform'), _('NAME')),
    ('', 'run', False, _('performs an upgrade')),
])
def debugupgraderepo(ui, repo, run=False, optimize=None):
    """upgrade a repository to use different features

    If no arguments are specified, the repository is evaluated for upgrade
    and a list of problems and potential optimizations is printed.

    With ``--run``, a repository upgrade is performed. Behavior of the upgrade
    can be influenced via additional arguments. More details will be provided
    by the command output when run without ``--run``.

    During the upgrade, the repository will be locked and no writes will be
    allowed.

    At the end of the upgrade, the repository may not be readable while new
    repository data is swapped in. This window will be as long as it takes to
    rename some directories inside the ``.hg`` directory. On most machines, this
    should complete almost instantaneously and the chances of a consumer being
    unable to access the repository should be low.
    """
    return upgrade.upgraderepo(ui, repo, run=run, optimize=optimize)

@command('debugwalk', cmdutil.walkopts, _('[OPTION]... [FILE]...'),
         inferrepo=True)
def debugwalk(ui, repo, *pats, **opts):
    """show how files match on given patterns"""
    opts = pycompat.byteskwargs(opts)
    m = scmutil.match(repo[None], pats, opts)
    ui.write(('matcher: %r\n' % m))
    items = list(repo[None].walk(m))
    if not items:
        return
    f = lambda fn: fn
    if ui.configbool('ui', 'slash') and pycompat.ossep != '/':
        f = lambda fn: util.normpath(fn)
    fmt = 'f %%-%ds %%-%ds %%s' % (
        max([len(abs) for abs in items]),
        max([len(m.rel(abs)) for abs in items]))
    for abs in items:
        line = fmt % (abs, f(m.rel(abs)), m.exact(abs) and 'exact' or '')
        ui.write("%s\n" % line.rstrip())

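The fmt construction in debugwalk is a two-pass trick: the first substitution bakes the computed column widths into the format string itself ('%%-%ds' becomes e.g. '%-19s'), and the second formats the row data. A self-contained sketch of the same idea, with made-up sample paths and labels:

items = ['setup.py', 'mercurial/phases.py', 'tests/test-phases.t']
labels = ['exact', '', 'glob']

# First pass: substitute the computed width into the format string itself.
fmt = 'f %%-%ds %%s' % max(len(path) for path in items)

# Second pass: format the actual row data with the finished template.
for path, label in zip(items, labels):
    print((fmt % (path, label)).rstrip())
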
@command('debugwireargs',
    [('', 'three', '', 'three'),
     ('', 'four', '', 'four'),
     ('', 'five', '', 'five'),
    ] + cmdutil.remoteopts,
    _('REPO [OPTIONS]... [ONE [TWO]]'),
    norepo=True)
def debugwireargs(ui, repopath, *vals, **opts):
    opts = pycompat.byteskwargs(opts)
    repo = hg.peer(ui, opts, repopath)
    for opt in cmdutil.remoteopts:
        del opts[opt[1]]
    args = {}
    for k, v in opts.iteritems():
        if v:
            args[k] = v
    # run twice to check that we don't mess up the stream for the next command
    res1 = repo.debugwireargs(*vals, **args)
    res2 = repo.debugwireargs(*vals, **args)
    ui.write("%s\n" % res1)
    if res1 != res2:
        ui.warn("%s\n" % res2)

@@ -1,611 +1,627 @@
1 """ Mercurial phases support code
1 """ Mercurial phases support code
2
2
3 ---
3 ---
4
4
5 Copyright 2011 Pierre-Yves David <pierre-yves.david@ens-lyon.org>
5 Copyright 2011 Pierre-Yves David <pierre-yves.david@ens-lyon.org>
6 Logilab SA <contact@logilab.fr>
6 Logilab SA <contact@logilab.fr>
7 Augie Fackler <durin42@gmail.com>
7 Augie Fackler <durin42@gmail.com>
8
8
9 This software may be used and distributed according to the terms
9 This software may be used and distributed according to the terms
10 of the GNU General Public License version 2 or any later version.
10 of the GNU General Public License version 2 or any later version.
11
11
12 ---
12 ---
13
13
14 This module implements most phase logic in mercurial.
14 This module implements most phase logic in mercurial.
15
15
16
16
17 Basic Concept
17 Basic Concept
18 =============
18 =============
19
19
20 A 'changeset phase' is an indicator that tells us how a changeset is
20 A 'changeset phase' is an indicator that tells us how a changeset is
21 manipulated and communicated. The details of each phase is described
21 manipulated and communicated. The details of each phase is described
22 below, here we describe the properties they have in common.
22 below, here we describe the properties they have in common.
23
23
24 Like bookmarks, phases are not stored in history and thus are not
24 Like bookmarks, phases are not stored in history and thus are not
25 permanent and leave no audit trail.
25 permanent and leave no audit trail.
26
26
27 First, no changeset can be in two phases at once. Phases are ordered,
27 First, no changeset can be in two phases at once. Phases are ordered,
28 so they can be considered from lowest to highest. The default, lowest
28 so they can be considered from lowest to highest. The default, lowest
29 phase is 'public' - this is the normal phase of existing changesets. A
29 phase is 'public' - this is the normal phase of existing changesets. A
30 child changeset can not be in a lower phase than its parents.
30 child changeset can not be in a lower phase than its parents.
31
31
32 These phases share a hierarchy of traits:
32 These phases share a hierarchy of traits:
33
33
34 immutable shared
34 immutable shared
35 public: X X
35 public: X X
36 draft: X
36 draft: X
37 secret:
37 secret:
38
38
39 Local commits are draft by default.
39 Local commits are draft by default.
40
40
41 Phase Movement and Exchange
41 Phase Movement and Exchange
42 ===========================
42 ===========================
43
43
44 Phase data is exchanged by pushkey on pull and push. Some servers have
44 Phase data is exchanged by pushkey on pull and push. Some servers have
45 a publish option set, we call such a server a "publishing server".
45 a publish option set, we call such a server a "publishing server".
46 Pushing a draft changeset to a publishing server changes the phase to
46 Pushing a draft changeset to a publishing server changes the phase to
47 public.
47 public.
48
48
49 A small list of fact/rules define the exchange of phase:
49 A small list of fact/rules define the exchange of phase:
50
50
51 * old client never changes server states
51 * old client never changes server states
52 * pull never changes server states
52 * pull never changes server states
53 * publish and old server changesets are seen as public by client
53 * publish and old server changesets are seen as public by client
54 * any secret changeset seen in another repository is lowered to at
54 * any secret changeset seen in another repository is lowered to at
55 least draft
55 least draft
56
56
57 Here is the final table summing up the 49 possible use cases of phase
57 Here is the final table summing up the 49 possible use cases of phase
58 exchange:
58 exchange:
59
59
60 server
60 server
61 old publish non-publish
61 old publish non-publish
62 N X N D P N D P
62 N X N D P N D P
63 old client
63 old client
64 pull
64 pull
65 N - X/X - X/D X/P - X/D X/P
65 N - X/X - X/D X/P - X/D X/P
66 X - X/X - X/D X/P - X/D X/P
66 X - X/X - X/D X/P - X/D X/P
67 push
67 push
68 X X/X X/X X/P X/P X/P X/D X/D X/P
68 X X/X X/X X/P X/P X/P X/D X/D X/P
69 new client
69 new client
70 pull
70 pull
71 N - P/X - P/D P/P - D/D P/P
71 N - P/X - P/D P/P - D/D P/P
72 D - P/X - P/D P/P - D/D P/P
72 D - P/X - P/D P/P - D/D P/P
73 P - P/X - P/D P/P - P/D P/P
73 P - P/X - P/D P/P - P/D P/P
74 push
74 push
75 D P/X P/X P/P P/P P/P D/D D/D P/P
75 D P/X P/X P/P P/P P/P D/D D/D P/P
76 P P/X P/X P/P P/P P/P P/P P/P P/P
76 P P/X P/X P/P P/P P/P P/P P/P P/P
77
77
78 Legend:
78 Legend:
79
79
80 A/B = final state on client / state on server
80 A/B = final state on client / state on server
81
81
82 * N = new/not present,
82 * N = new/not present,
83 * P = public,
83 * P = public,
84 * D = draft,
84 * D = draft,
85 * X = not tracked (i.e., the old client or server has no internal
85 * X = not tracked (i.e., the old client or server has no internal
86 way of recording the phase.)
86 way of recording the phase.)
87
87
88 passive = only pushes
88 passive = only pushes
89
89
90
90
91 A cell here can be read like this:
91 A cell here can be read like this:
92
92
93 "When a new client pushes a draft changeset (D) to a publishing
93 "When a new client pushes a draft changeset (D) to a publishing
94 server where it's not present (N), it's marked public on both
94 server where it's not present (N), it's marked public on both
95 sides (P/P)."
95 sides (P/P)."
96
96
97 Note: old client behave as a publishing server with draft only content
97 Note: old client behave as a publishing server with draft only content
98 - other people see it as public
98 - other people see it as public
99 - content is pushed as draft
99 - content is pushed as draft
100
100
101 """
101 """

from __future__ import absolute_import

import errno
import struct

from .i18n import _
from .node import (
    bin,
    hex,
    nullid,
    nullrev,
    short,
)
from . import (
    error,
    smartset,
    txnutil,
    util,
)

# binary phase-heads entry: big-endian int32 phase followed by a 20-byte node
_fphasesentry = struct.Struct('>i20s')

# phase constants: public=0, draft=1, secret=2
allphases = public, draft, secret = range(3)
trackedphases = allphases[1:]
phasenames = ['public', 'draft', 'secret']

def _readroots(repo, phasedefaults=None):
    """Read phase roots from disk

    phasedefaults is a list of fn(repo, roots) callables, which are
    executed if the phase roots file does not exist. When phases are
    being initialized on an existing repository, this could be used to
    set selected changesets' phase to something other than public.

    Return (roots, dirty) where dirty is true if roots differ from
    what is being stored.
    """
    repo = repo.unfiltered()
    dirty = False
    roots = [set() for i in allphases]
    try:
        f, pending = txnutil.trypending(repo.root, repo.svfs, 'phaseroots')
        try:
            for line in f:
                phase, nh = line.split()
                roots[int(phase)].add(bin(nh))
        finally:
            f.close()
    except IOError as inst:
        if inst.errno != errno.ENOENT:
            raise
        if phasedefaults:
            for f in phasedefaults:
                roots = f(repo, roots)
        dirty = True
    return roots, dirty

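The on-disk 'phaseroots' file parsed above is a plain text format, one "<phase> <hexnode>" pair per line (see _write below, which emits exactly that). A minimal standalone sketch of the parsing side, with made-up node hashes and binascii.unhexlify standing in for Mercurial's bin():

import binascii

# Hypothetical file contents: one "<phase> <hexnode>" entry per line.
sample = (
    '1 1111111111111111111111111111111111111111\n'
    '2 2222222222222222222222222222222222222222\n'
)

roots = [set(), set(), set()]  # public, draft, secret
for line in sample.splitlines():
    phase, nh = line.split()
    roots[int(phase)].add(binascii.unhexlify(nh))  # bin() in Mercurial

assert len(roots[1]) == 1 and len(roots[2]) == 1
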
def binaryencode(phasemapping):
    """encode a 'phase -> nodes' mapping into a binary stream

    Since phases are integers, the mapping is actually a python list:
    [[PUBLIC_HEADS], [DRAFT_HEADS], [SECRET_HEADS]]
    """
    binarydata = []
    for phase, nodes in enumerate(phasemapping):
        for head in nodes:
            binarydata.append(_fphasesentry.pack(phase, head))
    return ''.join(binarydata)

def binarydecode(stream):
    """decode a binary stream into a 'phase -> nodes' mapping

    Since phases are integers, the mapping is actually a python list."""
    headsbyphase = [[] for i in allphases]
    entrysize = _fphasesentry.size
    while True:
        entry = stream.read(entrysize)
        if len(entry) < entrysize:
            if entry:
                raise error.Abort(_('bad phase-heads stream'))
            break
        phase, node = _fphasesentry.unpack(entry)
        headsbyphase[phase].append(node)
    return headsbyphase

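Together, binaryencode and binarydecode define the phase-heads wire format: a flat sequence of fixed-size entries, each a big-endian int32 phase followed by a 20-byte node. A self-contained round-trip sketch of that format (plain Python 3; the encode/decode/entryfmt names stand in for the module's functions and _fphasesentry, and the node values are made up):

import io
import struct

entryfmt = struct.Struct('>i20s')  # same layout as _fphasesentry

def encode(headsbyphase):
    return b''.join(entryfmt.pack(phase, head)
                    for phase, heads in enumerate(headsbyphase)
                    for head in heads)

def decode(stream):
    headsbyphase = [[], [], []]  # public, draft, secret
    while True:
        entry = stream.read(entryfmt.size)
        if len(entry) < entryfmt.size:
            if entry:  # trailing garbage: a partial entry
                raise ValueError('bad phase-heads stream')
            break
        phase, node = entryfmt.unpack(entry)
        headsbyphase[phase].append(node)
    return headsbyphase

heads = [[], [b'\x11' * 20], [b'\x22' * 20]]  # made-up 20-byte nodes
assert decode(io.BytesIO(encode(heads))) == heads
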
def _trackphasechange(data, rev, old, new):
    """add a phase move to the <data> dictionary

    If data is None, nothing happens.
    """
    if data is None:
        return
    existing = data.get(rev)
    if existing is not None:
        old = existing[0]
    data[rev] = (old, new)

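Note the 'existing' check: if a revision moves several times inside one transaction, the recorded tuple keeps the original old phase while the new phase is overwritten. A quick illustrative run of the function above against a plain dict, using the module's secret=2/draft=1/public=0 numbering:

tracked = {}
_trackphasechange(tracked, 5, 2, 1)  # rev 5: secret -> draft
_trackphasechange(tracked, 5, 1, 0)  # rev 5: draft -> public, same transaction
assert tracked[5] == (2, 0)          # the original 'old' phase is preserved
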
class phasecache(object):
    def __init__(self, repo, phasedefaults, _load=True):
        if _load:
            # Cheap trick to allow shallow-copy without copy module
            self.phaseroots, self.dirty = _readroots(repo, phasedefaults)
            self._phaserevs = None
            self._phasesets = None
            self.filterunknown(repo)
            self.opener = repo.svfs

    def getrevset(self, repo, phases):
        """return a smartset for the given phases"""
        self.loadphaserevs(repo) # ensure phase sets are loaded

        if self._phasesets and all(self._phasesets[p] is not None
                                   for p in phases):
            # fast path - use _phasesets
            revs = self._phasesets[phases[0]]
            if len(phases) > 1:
                revs = revs.copy() # only copy when needed
                for p in phases[1:]:
                    revs.update(self._phasesets[p])
            if repo.changelog.filteredrevs:
                revs = revs - repo.changelog.filteredrevs
            return smartset.baseset(revs)
        else:
            # slow path - enumerate all revisions
            phase = self.phase
            revs = (r for r in repo if phase(repo, r) in phases)
            return smartset.generatorset(revs, iterasc=True)

    def copy(self):
        # Shallow copy meant to ensure isolation in
        # advance/retractboundary(), nothing more.
        ph = self.__class__(None, None, _load=False)
        ph.phaseroots = self.phaseroots[:]
        ph.dirty = self.dirty
        ph.opener = self.opener
        ph._phaserevs = self._phaserevs
        ph._phasesets = self._phasesets
        return ph

    def replace(self, phcache):
        """replace all values in 'self' with content of phcache"""
        for a in ('phaseroots', 'dirty', 'opener', '_phaserevs', '_phasesets'):
            setattr(self, a, getattr(phcache, a))

    def _getphaserevsnative(self, repo):
        repo = repo.unfiltered()
        nativeroots = []
        for phase in trackedphases:
            nativeroots.append(map(repo.changelog.rev, self.phaseroots[phase]))
        return repo.changelog.computephases(nativeroots)

    def _computephaserevspure(self, repo):
        repo = repo.unfiltered()
        revs = [public] * len(repo.changelog)
        self._phaserevs = revs
        self._populatephaseroots(repo)
        for phase in trackedphases:
            roots = list(map(repo.changelog.rev, self.phaseroots[phase]))
            if roots:
                for rev in roots:
                    revs[rev] = phase
                for rev in repo.changelog.descendants(roots):
                    revs[rev] = phase

    def loadphaserevs(self, repo):
        """ensure phase information is loaded in the object"""
        if self._phaserevs is None:
            try:
                res = self._getphaserevsnative(repo)
                self._phaserevs, self._phasesets = res
            except AttributeError:
                self._computephaserevspure(repo)

    def invalidate(self):
        self._phaserevs = None
        self._phasesets = None

    def _populatephaseroots(self, repo):
        """Fills the _phaserevs cache with phases for the roots.
        """
        cl = repo.changelog
        phaserevs = self._phaserevs
        for phase in trackedphases:
            roots = map(cl.rev, self.phaseroots[phase])
            for root in roots:
                phaserevs[root] = phase

    def phase(self, repo, rev):
        # We need a repo argument here to be able to build _phaserevs
        # if necessary. The repository instance is not stored in
        # phasecache to avoid reference cycles. The changelog instance
        # is not stored because it is a filecache() property and can
        # be replaced without us being notified.
        if rev == nullrev:
            return public
        if rev < nullrev:
            raise ValueError(_('cannot lookup negative revision'))
        if self._phaserevs is None or rev >= len(self._phaserevs):
            self.invalidate()
            self.loadphaserevs(repo)
        return self._phaserevs[rev]

    def write(self):
        if not self.dirty:
            return
        f = self.opener('phaseroots', 'w', atomictemp=True, checkambig=True)
        try:
            self._write(f)
        finally:
            f.close()

    def _write(self, fp):
        for phase, roots in enumerate(self.phaseroots):
            for h in roots:
                fp.write('%i %s\n' % (phase, hex(h)))
        self.dirty = False

    def _updateroots(self, phase, newroots, tr):
        self.phaseroots[phase] = newroots
        self.invalidate()
        self.dirty = True

        tr.addfilegenerator('phase', ('phaseroots',), self._write)
        tr.hookargs['phases_moved'] = '1'

    def registernew(self, repo, tr, targetphase, nodes):
        repo = repo.unfiltered()
        self._retractboundary(repo, tr, targetphase, nodes)
        if tr is not None and 'phases' in tr.changes:
            phasetracking = tr.changes['phases']
            torev = repo.changelog.rev
            phase = self.phase
            for n in nodes:
                rev = torev(n)
                revphase = phase(repo, rev)
                _trackphasechange(phasetracking, rev, None, revphase)
        repo.invalidatevolatilesets()

    def advanceboundary(self, repo, tr, targetphase, nodes):
        """Set all 'nodes' to phase 'targetphase'

        Nodes with a phase lower than 'targetphase' are not affected.
        """
        # Be careful to preserve shallow-copied values: do not update
        # phaseroots values, replace them.
        if tr is None:
            phasetracking = None
        else:
            phasetracking = tr.changes.get('phases')

        repo = repo.unfiltered()

        delroots = [] # set of roots deleted by this path
        for phase in xrange(targetphase + 1, len(allphases)):
            # filter nodes that are not in a compatible phase already
            nodes = [n for n in nodes
                     if self.phase(repo, repo[n].rev()) >= phase]
            if not nodes:
                break # no roots to move anymore

            olds = self.phaseroots[phase]

            affected = repo.revs('%ln::%ln', olds, nodes)
            for r in affected:
                _trackphasechange(phasetracking, r, self.phase(repo, r),
                                  targetphase)

            roots = set(ctx.node() for ctx in repo.set(
                'roots((%ln::) - %ld)', olds, affected))
            if olds != roots:
                self._updateroots(phase, roots, tr)
                # some roots may need to be declared for lower phases
                delroots.extend(olds - roots)
        # declare the deleted roots in the target phase
        if targetphase != 0:
            self._retractboundary(repo, tr, targetphase, delroots)
        repo.invalidatevolatilesets()

    def retractboundary(self, repo, tr, targetphase, nodes):
        oldroots = self.phaseroots[:targetphase + 1]
        if tr is None:
            phasetracking = None
        else:
            phasetracking = tr.changes.get('phases')
        repo = repo.unfiltered()
        if (self._retractboundary(repo, tr, targetphase, nodes)
            and phasetracking is not None):

            # find the affected revisions
            new = self.phaseroots[targetphase]
            old = oldroots[targetphase]
            affected = set(repo.revs('(%ln::) - (%ln::)', new, old))

            # find the phase of the affected revisions
            for phase in xrange(targetphase, -1, -1):
                if phase:
                    roots = oldroots[phase]
                    revs = set(repo.revs('%ln::%ld', roots, affected))
                    affected -= revs
                else: # public phase
                    revs = affected
                for r in revs:
                    _trackphasechange(phasetracking, r, phase, targetphase)
        repo.invalidatevolatilesets()

    def _retractboundary(self, repo, tr, targetphase, nodes):
        # Be careful to preserve shallow-copied values: do not update
        # phaseroots values, replace them.

        repo = repo.unfiltered()
        currentroots = self.phaseroots[targetphase]
        finalroots = oldroots = set(currentroots)
        newroots = [n for n in nodes
                    if self.phase(repo, repo[n].rev()) < targetphase]
        if newroots:

            if nullid in newroots:
                raise error.Abort(_('cannot change null revision phase'))
            currentroots = currentroots.copy()
            currentroots.update(newroots)

            # Only compute new roots for revs above the roots that are being
            # retracted.
            minnewroot = min(repo[n].rev() for n in newroots)
            aboveroots = [n for n in currentroots
                          if repo[n].rev() >= minnewroot]
            updatedroots = repo.set('roots(%ln::)', aboveroots)

            finalroots = set(n for n in currentroots if repo[n].rev() <
                             minnewroot)
            finalroots.update(ctx.node() for ctx in updatedroots)
        if finalroots != oldroots:
            self._updateroots(targetphase, finalroots, tr)
            return True
        return False

    def filterunknown(self, repo):
        """remove unknown nodes from the phase boundary

        Nothing is lost as unknown nodes only hold data for their descendants.
        """
        filtered = False
        nodemap = repo.changelog.nodemap # to filter unknown nodes
        for phase, nodes in enumerate(self.phaseroots):
            missing = sorted(node for node in nodes if node not in nodemap)
            if missing:
                for mnode in missing:
                    repo.ui.debug(
                        'removing unknown node %s from %i-phase boundary\n'
                        % (short(mnode), phase))
                nodes.symmetric_difference_update(missing)
                filtered = True
        if filtered:
            self.dirty = True
        # filterunknown is called by repo.destroyed; we may have no changes
        # in the roots, but the phaserevs contents are certainly invalid (or
        # at least we have no proper way to check that). Related to issue
        # 3858.
        #
        # The other caller is __init__, which has no _phaserevs initialized
        # anyway. If this changes, we should consider adding a dedicated
        # "destroyed" function to phasecache or a proper cache key mechanism
        # (see the branchmap one).
        self.invalidate()

def advanceboundary(repo, tr, targetphase, nodes):
    """Add nodes to a phase, changing other nodes' phases if necessary.

    This function moves the boundary *forward*: all affected nodes are
    set to the target phase or kept in a *lower* phase.

    The boundary is simplified to contain phase roots only."""
    phcache = repo._phasecache.copy()
    phcache.advanceboundary(repo, tr, targetphase, nodes)
    repo._phasecache.replace(phcache)

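The module-level helpers here all follow the same copy/modify/replace discipline: mutate a shallow copy of repo._phasecache, then swap it in only once the operation has succeeded, so a failure midway leaves the live cache untouched. A generic sketch of the pattern, using a hypothetical Cache class rather than phasecache itself:

class Cache(object):
    def __init__(self, data):
        self.data = data
    def copy(self):
        return Cache(dict(self.data))  # shallow, isolated copy
    def replace(self, other):
        self.data = other.data         # adopt the updated state

cache = Cache({'roots': 1})
working = cache.copy()
working.data['roots'] = 2              # mutate the copy only
cache.replace(working)                 # commit atomically at the end
assert cache.data['roots'] == 2
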
def retractboundary(repo, tr, targetphase, nodes):
    """Set nodes back to a phase, changing other nodes' phases if
    necessary.

    This function moves the boundary *backward*: all affected nodes are
    set to the target phase or kept in a *higher* phase.

    The boundary is simplified to contain phase roots only."""
    phcache = repo._phasecache.copy()
    phcache.retractboundary(repo, tr, targetphase, nodes)
    repo._phasecache.replace(phcache)

def registernew(repo, tr, targetphase, nodes):
    """register a new revision and its phase

    Code adding revisions to the repository should use this function to
    set new changesets in their target phase (or higher).
    """
    phcache = repo._phasecache.copy()
    phcache.registernew(repo, tr, targetphase, nodes)
    repo._phasecache.replace(phcache)

def listphases(repo):
    """List phase roots for serialization over pushkey"""
    # Use an ordered dictionary so behavior is deterministic.
    keys = util.sortdict()
    value = '%i' % draft
    for root in repo._phasecache.phaseroots[draft]:
        keys[hex(root)] = value

    if repo.publishing():
        # Add some extra data to let the remote know we are a publishing
        # repo. Publishing repos can't just pretend they are old repos.
        # When pushing to a publishing repo, the client still needs to
        # push the phase boundary.
        #
        # A push does not only push changesets; it also pushes phase data.
        # New phase data may apply to common changesets which won't be
        # pushed (as they are common). Here is a very simple example:
        #
        # 1) repo A pushes changeset X as draft to repo B
        # 2) repo B makes changeset X public
        # 3) repo B pushes to repo A. X is not pushed, but the data that
        #    X is now public should be.
        #
        # The server can't handle this on its own as it has no idea of
        # the client's phase data.
        keys['publishing'] = 'True'
    return keys

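Over pushkey, the 'phases' namespace therefore serializes to a string-to-string mapping: each draft root's hex node maps to '1', plus an optional 'publishing' flag. A sketch of what a publishing server with one draft root might return (the node hash is made up, and OrderedDict stands in for util.sortdict):

from collections import OrderedDict  # stands in for util.sortdict

keys = OrderedDict()
keys['1111111111111111111111111111111111111111'] = '1'  # hex(root) -> draft
keys['publishing'] = 'True'  # advertised only by publishing repos
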
def pushphase(repo, nhex, oldphasestr, newphasestr):
    """Advance the phase of a single node over pushkey (compare-and-set)"""
    repo = repo.unfiltered()
    with repo.lock():
        currentphase = repo[nhex].phase()
        newphase = abs(int(newphasestr)) # let's avoid negative index surprise
        oldphase = abs(int(oldphasestr)) # let's avoid negative index surprise
        if currentphase == oldphase and newphase < oldphase:
            with repo.transaction('pushkey-phase') as tr:
                advanceboundary(repo, tr, newphase, [bin(nhex)])
            return True
        elif currentphase == newphase:
            # raced, but got correct result
            return True
        else:
            return False

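pushphase is effectively a compare-and-advance: the boundary only moves if the server still sees the phase the client expected, and a race that already produced the requested phase still counts as success. A plain-Python sketch of that decision logic (push_phase is a hypothetical name, with phases as plain ints):

def push_phase(current, old, new):
    """Return True if the update applies (or already holds), mirroring
    pushphase's three outcomes: advance, benign race, or refusal."""
    if current == old and new < old:
        return True   # expected state: advance the boundary to 'new'
    elif current == new:
        return True   # raced with another client, but the result is correct
    return False      # state changed underneath us: report failure

assert push_phase(current=1, old=1, new=0)      # draft -> public succeeds
assert push_phase(current=0, old=1, new=0)      # already public: benign race
assert not push_phase(current=2, old=1, new=0)  # unexpected secret: refuse
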
def subsetphaseheads(repo, subset):
    """Finds the phase heads for a subset of a history

    Returns a list indexed by phase number where each item is a list of phase
    head nodes.
    """
    cl = repo.changelog

    headsbyphase = [[] for i in allphases]
    # No need to keep track of the secret phase; any heads in the subset
    # that are not mentioned are implicitly secret.
    for phase in allphases[:-1]:
        revset = "heads(%%ln & %s())" % phasenames[phase]
        headsbyphase[phase] = [cl.node(r) for r in repo.revs(revset, subset)]
    return headsbyphase

def updatephases(repo, tr, headsbyphase):
    """Updates the repo with the given phase heads"""
    # Now advance the phase boundaries of all but the secret phase
    for phase in allphases[:-1]:
        advanceboundary(repo, tr, phase, headsbyphase[phase])

def analyzeremotephases(repo, subset, roots):
    """Compute phase heads and roots in a subset of nodes from a root dict

    * subset is the heads of the subset
    * roots is a {<nodeid> => phase} mapping; keys and values are strings.

    Unknown elements in the input are accepted.
    """
    repo = repo.unfiltered()
    # build list from dictionary
    draftroots = []
    nodemap = repo.changelog.nodemap # to filter unknown nodes
    for nhex, phase in roots.iteritems():
        if nhex == 'publishing': # ignore data related to the publish option
            continue
        node = bin(nhex)
        phase = int(phase)
        if phase == public:
            if node != nullid:
                repo.ui.warn(_('ignoring inconsistent public root'
                               ' from remote: %s\n') % nhex)
        elif phase == draft:
            if node in nodemap:
                draftroots.append(node)
        else:
            repo.ui.warn(_('ignoring unexpected root from remote: %i %s\n')
                         % (phase, nhex))
    # compute heads
    publicheads = newheads(repo, subset, draftroots)
    return publicheads, draftroots

def newheads(repo, heads, roots):
    """compute the new heads of a subset minus another

    * `heads`: defines the first subset
    * `roots`: defines the second, which we subtract from the first"""
    repo = repo.unfiltered()
    revset = repo.set('heads((%ln + parents(%ln)) - (%ln::%ln))',
                      heads, roots, roots, heads)
    return [c.node() for c in revset]


def newcommitphase(ui):
    """helper to get the target phase of new commits

    Handles all possible values of the phases.new-commit option.
    """
    v = ui.config('phases', 'new-commit', draft)
    try:
        return phasenames.index(v)
    except ValueError:
        try:
            return int(v)
        except ValueError:
            msg = _("phases.new-commit: not a valid phase name ('%s')")
            raise error.ConfigError(msg % v)

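phases.new-commit accepts either a phase name or its integer index, with the name lookup tried first. A standalone sketch of that fallback parsing (parse_new_commit is a hypothetical name, and the Abort/ConfigError handling is simplified to a plain ValueError):

phasenames = ['public', 'draft', 'secret']

def parse_new_commit(v):
    try:
        return phasenames.index(v)  # 'draft' -> 1
    except ValueError:
        try:
            return int(v)           # '2' -> 2
        except ValueError:
            raise ValueError(
                "phases.new-commit: not a valid phase name (%r)" % v)

assert parse_new_commit('draft') == 1
assert parse_new_commit('2') == 2
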
def hassecret(repo):
    """utility function that checks whether a repo has any secret changesets"""
    return bool(repo._phasecache.phaseroots[secret])