# bundle2.py - generic container format to transmit arbitrary data.
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""Handling of the new bundle2 format

The goal of bundle2 is to act as an atomic packet to transmit a set of
payloads in an application-agnostic way. It consists of a sequence of "parts"
that are handed to and processed by the application layer.


General format architecture
===========================

The format is structured as follows:

 - magic string
 - stream level parameters
 - payload parts (any number)
 - end of stream marker.

The binary format
============================

All numbers are unsigned and big-endian.

stream level parameters
------------------------

The binary format is as follows:

:params size: int32

  The total number of bytes used by the parameters

:params value: arbitrary number of bytes

  A blob of `params size` containing the serialized version of all stream level
  parameters.

  The blob contains a space separated list of parameters. Parameters with a
  value are stored in the form `<name>=<value>`. Both name and value are
  urlquoted.

  Empty names are forbidden.

  Names MUST start with a letter. If the first letter is lower case, the
  parameter is advisory and can be safely ignored. However, when the first
  letter is capital, the parameter is mandatory and the bundling process MUST
  stop if it is not able to process it.

  Stream parameters use a simple textual format for two main reasons:

  - Stream level parameters should remain simple and we want to discourage any
    crazy usage.
  - Textual data allow easy human inspection of a bundle2 header in case of
    trouble.

  Any application level options MUST go into a bundle2 part instead.

Payload part
------------------------

The binary format is as follows:

:header size: int32

  The total number of bytes used by the part header. When the header is empty
  (size = 0) this is interpreted as the end of stream marker.

:header:

    The header defines how to interpret the part. It contains two pieces of
    data: the part type, and the part parameters.

    The part type is used to route to an application level handler that can
    interpret the payload.

    Part parameters are passed to the application level handler. They are
    meant to convey information that will help the application level object
    interpret the part payload.

    The binary format of the header is as follows:

    :typesize: (one byte)

    :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)

    :partid: A 32bits integer (unique in the bundle) that can be used to refer
             to this part.

    :parameters:

        Part's parameters may have arbitrary content, the binary structure is::

            <mandatory-count><advisory-count><param-sizes><param-data>

        :mandatory-count: 1 byte, number of mandatory parameters

        :advisory-count: 1 byte, number of advisory parameters

        :param-sizes:

            N couples of bytes, where N is the total number of parameters. Each
            couple contains (<size-of-key>, <size-of-value>) for one parameter.

        :param-data:

            A blob of bytes from which each parameter key and value can be
            retrieved using the list of size couples stored in the previous
            field.

            Mandatory parameters come first, then the advisory ones.

            Each parameter's key MUST be unique within the part.

:payload:

    payload is a series of `<chunksize><chunkdata>`.

    `chunksize` is an int32, `chunkdata` are plain bytes (as many as
    `chunksize` says). The payload part is concluded by a zero size chunk.

    The current implementation always produces either zero or one chunk.
    This is an implementation limitation that will ultimately be lifted.

    `chunksize` can be negative to trigger special case processing. No such
    processing is in place yet.

Bundle processing
============================

Each part is processed in order using a "part handler". Handlers are
registered for a certain part type.

The matching of a part to its handler is case insensitive. The case of the
part type is used to know if a part is mandatory or advisory. If the part type
contains any uppercase chars it is considered mandatory. When no handler is
known for a mandatory part, the process is aborted and an exception is raised.
If the part is advisory and no handler is known, the part is ignored. When the
process is aborted, the full bundle is still read from the stream to keep the
channel usable. But none of the parts read after an abort are processed. In
the future, dropping the stream may become an option for channels we do not
care to preserve.
"""

from __future__ import absolute_import, division

import collections
import errno
import os
import re
import string
import struct
import sys

from .i18n import _
from .node import (
    hex,
    nullid,
    short,
)
from . import (
    bookmarks,
    changegroup,
    encoding,
    error,
    obsolete,
    phases,
    pushkey,
    pycompat,
    requirements,
    scmutil,
    streamclone,
    tags,
    url,
    util,
)
from .utils import stringutil

urlerr = util.urlerr
urlreq = util.urlreq

_pack = struct.pack
_unpack = struct.unpack

_fstreamparamsize = b'>i'
_fpartheadersize = b'>i'
_fparttypesize = b'>B'
_fpartid = b'>I'
_fpayloadsize = b'>i'
_fpartparamcount = b'>BB'

preferedchunksize = 32768

_parttypeforbidden = re.compile(b'[^a-zA-Z0-9_:-]')
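

# A minimal sketch (hypothetical helper, not part of the original module) of
# how the struct formats above map onto the wire: a part header is announced
# by an int32 size prefix, and a zero-size header doubles as the
# end-of-stream marker described in the module docstring.
def _demopackheadersize(headerblock):
    """return the int32 size prefix for a header blob (b'' -> end marker)"""
    return _pack(_fpartheadersize, len(headerblock))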


def outdebug(ui, message):
    """debug regarding output stream (bundling)"""
    if ui.configbool(b'devel', b'bundle2.debug'):
        ui.debug(b'bundle2-output: %s\n' % message)


def indebug(ui, message):
    """debug on input stream (unbundling)"""
    if ui.configbool(b'devel', b'bundle2.debug'):
        ui.debug(b'bundle2-input: %s\n' % message)


def validateparttype(parttype):
    """raise ValueError if a parttype contains invalid characters"""
    if _parttypeforbidden.search(parttype):
        raise ValueError(parttype)
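

# A quick illustrative check (hypothetical helper): alphanumerics, colons,
# dashes and underscores are allowed in part type names, anything else is
# rejected.
def _demovalidateparttype():
    validateparttype(b'changegroup')  # fine
    validateparttype(b'b2x:listkeys')  # colons are allowed
    try:
        validateparttype(b'bad part')  # space -> ValueError
        return False
    except ValueError:
        return True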


def _makefpartparamsizes(nbparams):
    """return a struct format to read part parameter sizes

    The number of parameters is variable, so we need to build that format
    dynamically.
    """
    return b'>' + (b'BB' * nbparams)
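

# A small usage sketch (hypothetical helper): with two parameters the
# dynamically built format reads two (key size, value size) byte couples.
def _demoparamsizesformat():
    fmt = _makefpartparamsizes(2)
    assert fmt == b'>BBBB'
    # struct.calcsize gives the number of param-sizes bytes to read.
    return struct.calcsize(fmt)  # -> 4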


parthandlermapping = {}


def parthandler(parttype, params=()):
    """decorator that registers a function as a bundle2 part handler

    eg::

        @parthandler('myparttype', ('mandatory', 'param', 'handled'))
        def myparttypehandler(...):
            '''process a part of type "my part".'''
            ...
    """
    validateparttype(parttype)

    def _decorator(func):
        lparttype = parttype.lower()  # enforce lower case matching.
        assert lparttype not in parthandlermapping
        parthandlermapping[lparttype] = func
        func.params = frozenset(params)
        return func

    return _decorator
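

# A hedged registration sketch for a hypothetical advisory part type
# ("demo:noop" is illustrative, not a real bundle2 part). Matching is case
# insensitive; the all-lowercase name keeps the part advisory on the wire.
@parthandler(b'demo:noop', (b'reason',))
def _demonoophandler(op, part):
    """record the hypothetical "demo:noop" part without applying anything"""
    op.records.add(b'demo:noop', {b'reason': part.params.get(b'reason')})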


class unbundlerecords(object):
    """keep record of what happens during an unbundle

    New records are added using `records.add('cat', obj)`, where 'cat' is a
    category of record and obj is an arbitrary object.

    `records['cat']` will return all entries of this category 'cat'.

    Iterating on the object itself will yield `('category', obj)` tuples
    for all entries.

    All iteration happens in chronological order.
    """

    def __init__(self):
        self._categories = {}
        self._sequences = []
        self._replies = {}

    def add(self, category, entry, inreplyto=None):
        """add a new record of a given category.

        The entry can then be retrieved in the list returned by
        self['category']."""
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))
        if inreplyto is not None:
            self.getreplies(inreplyto).add(category, entry)

    def getreplies(self, partid):
        """get the records that are replies to a specific part"""
        return self._replies.setdefault(partid, unbundlerecords())

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

    def __len__(self):
        return len(self._sequences)

    def __nonzero__(self):
        return bool(self._sequences)

    __bool__ = __nonzero__
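

# A minimal usage sketch for unbundlerecords (hypothetical data): entries are
# grouped by category while the overall insertion order is preserved.
def _demounbundlerecords():
    records = unbundlerecords()
    records.add(b'changegroup', {b'return': 1})
    records.add(b'output', b'some text')
    assert records[b'changegroup'] == ({b'return': 1},)
    return list(records)  # chronological (category, entry) tuples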


class bundleoperation(object):
    """an object that represents a single bundling process

    Its purpose is to carry unbundle-related objects and states.

    A new object should be created at the beginning of each bundle processing.
    The object is to be returned by the processing function.

    The object currently has very little content; it will ultimately contain:
    * an access to the repo the bundle is applied to,
    * a ui object,
    * a way to retrieve a transaction to add changes to the repo,
    * a way to record the result of processing each part,
    * a way to construct a bundle response when applicable.
    """

    def __init__(self, repo, transactiongetter, captureoutput=True, source=b''):
        self.repo = repo
        self.ui = repo.ui
        self.records = unbundlerecords()
        self.reply = None
        self.captureoutput = captureoutput
        self.hookargs = {}
        self._gettransaction = transactiongetter
        # carries values that can modify part behavior
        self.modes = {}
        self.source = source

    def gettransaction(self):
        transaction = self._gettransaction()

        if self.hookargs:
            # the ones added to the transaction supersede those added
            # to the operation.
            self.hookargs.update(transaction.hookargs)
            transaction.hookargs = self.hookargs

            # mark the hookargs as flushed.  further attempts to add to
            # hookargs will result in an abort.
            self.hookargs = None

        return transaction

    def addhookargs(self, hookargs):
        if self.hookargs is None:
            raise error.ProgrammingError(
                b'attempted to add hookargs to '
                b'operation after transaction started'
            )
        self.hookargs.update(hookargs)


class TransactionUnavailable(RuntimeError):
    pass


def _notransaction():
    """default method to get a transaction while processing a bundle

    Raise an exception to highlight the fact that no transaction was expected
    to be created"""
    raise TransactionUnavailable()


def applybundle(repo, unbundler, tr, source, url=None, **kwargs):
    # transform me into unbundler.apply() as soon as the freeze is lifted
    if isinstance(unbundler, unbundle20):
        tr.hookargs[b'bundle2'] = b'1'
        if source is not None and b'source' not in tr.hookargs:
            tr.hookargs[b'source'] = source
        if url is not None and b'url' not in tr.hookargs:
            tr.hookargs[b'url'] = url
        return processbundle(repo, unbundler, lambda: tr, source=source)
    else:
        # the transactiongetter won't be used, but we might as well set it
        op = bundleoperation(repo, lambda: tr, source=source)
        _processchangegroup(op, unbundler, tr, source, url, **kwargs)
        return op
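

# A hedged sketch of the usual call pattern (`repo`, `fp`, and the
# transaction description are illustrative): applybundle() is normally run
# inside an open transaction so hooks observe a consistent hookargs dict.
def _demoapplybundle(repo, fp, url=None):
    unbundler = getunbundler(repo.ui, fp)
    with repo.transaction(b'demo-unbundle') as tr:
        return applybundle(repo, unbundler, tr, source=b'push', url=url)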


class partiterator(object):
    def __init__(self, repo, op, unbundler):
        self.repo = repo
        self.op = op
        self.unbundler = unbundler
        self.iterator = None
        self.count = 0
        self.current = None

    def __enter__(self):
        def func():
            itr = enumerate(self.unbundler.iterparts(), 1)
            for count, p in itr:
                self.count = count
                self.current = p
                yield p
                p.consume()
                self.current = None

        self.iterator = func()
        return self.iterator

    def __exit__(self, type, exc, tb):
        if not self.iterator:
            return

        # Only gracefully abort in a normal exception situation. User aborts
        # like Ctrl+C throw a KeyboardInterrupt which is not a base Exception,
        # and should not gracefully cleanup.
        if isinstance(exc, Exception):
            # Any exceptions seeking to the end of the bundle at this point
            # are almost certainly related to the underlying stream being
            # bad. And, chances are that the exception we're handling is
            # related to getting in that bad state. So, we swallow the
            # seeking error and re-raise the original error.
            seekerror = False
            try:
                if self.current:
                    # consume the part content to not corrupt the stream.
                    self.current.consume()

                for part in self.iterator:
                    # consume the bundle content
                    part.consume()
            except Exception:
                seekerror = True

            # Small hack to let caller code distinguish exceptions from
            # bundle2 processing from processing the old format. This is
            # mostly needed to handle different return codes to unbundle
            # according to the type of bundle. We should probably clean up or
            # drop this return code craziness in a future version.
            exc.duringunbundle2 = True
            salvaged = []
            replycaps = None
            if self.op.reply is not None:
                salvaged = self.op.reply.salvageoutput()
                replycaps = self.op.reply.capabilities
            exc._replycaps = replycaps
            exc._bundle2salvagedoutput = salvaged

            # Re-raising from a variable loses the original stack. So only use
            # that form if we need to.
            if seekerror:
                raise exc

        self.repo.ui.debug(
            b'bundle2-input-bundle: %i parts total\n' % self.count
        )


def processbundle(repo, unbundler, transactiongetter=None, op=None, source=b''):
    """This function processes a bundle, applying its effects to/from a repo

    It iterates over each part, then searches for and uses the proper handling
    code to process the part. Parts are processed in order.

    An unknown mandatory part will abort the process.

    It is temporarily possible to provide a prebuilt bundleoperation to the
    function. This is used to ensure output is properly propagated in case of
    an error during the unbundling. This output capturing part will likely be
    reworked and this ability will probably go away in the process.
    """
    if op is None:
        if transactiongetter is None:
            transactiongetter = _notransaction
        op = bundleoperation(repo, transactiongetter, source=source)
    # todo:
    # - replace this with an init function soon.
    # - exception catching
    unbundler.params
    if repo.ui.debugflag:
        msg = [b'bundle2-input-bundle:']
        if unbundler.params:
            msg.append(b' %i params' % len(unbundler.params))
        if op._gettransaction is None or op._gettransaction is _notransaction:
            msg.append(b' no-transaction')
        else:
            msg.append(b' with-transaction')
        msg.append(b'\n')
        repo.ui.debug(b''.join(msg))

    processparts(repo, op, unbundler)

    return op
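

# Hedged sketch: processing a bundle that is not expected to modify the repo
# (names here are illustrative); _notransaction raises TransactionUnavailable
# if any part nevertheless asks for a transaction.
def _demoprocessreadonly(repo, fp):
    unbundler = getunbundler(repo.ui, fp)
    return processbundle(repo, unbundler, transactiongetter=_notransaction)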


def processparts(repo, op, unbundler):
    with partiterator(repo, op, unbundler) as parts:
        for part in parts:
            _processpart(op, part)


def _processchangegroup(op, cg, tr, source, url, **kwargs):
    ret = cg.apply(op.repo, tr, source, url, **kwargs)
    op.records.add(
        b'changegroup',
        {
            b'return': ret,
        },
    )
    return ret


def _gethandler(op, part):
    status = b'unknown'  # used by debug output
    try:
        handler = parthandlermapping.get(part.type)
        if handler is None:
            status = b'unsupported-type'
            raise error.BundleUnknownFeatureError(parttype=part.type)
        indebug(op.ui, b'found a handler for part %s' % part.type)
        unknownparams = part.mandatorykeys - handler.params
        if unknownparams:
            unknownparams = list(unknownparams)
            unknownparams.sort()
            status = b'unsupported-params (%s)' % b', '.join(unknownparams)
            raise error.BundleUnknownFeatureError(
                parttype=part.type, params=unknownparams
            )
        status = b'supported'
    except error.BundleUnknownFeatureError as exc:
        if part.mandatory:  # mandatory parts
            raise
        indebug(op.ui, b'ignoring unsupported advisory part %s' % exc)
        return  # skip to part processing
    finally:
        if op.ui.debugflag:
            msg = [b'bundle2-input-part: "%s"' % part.type]
            if not part.mandatory:
                msg.append(b' (advisory)')
            nbmp = len(part.mandatorykeys)
            nbap = len(part.params) - nbmp
            if nbmp or nbap:
                msg.append(b' (params:')
                if nbmp:
                    msg.append(b' %i mandatory' % nbmp)
                if nbap:
                    msg.append(b' %i advisory' % nbap)
                msg.append(b')')
            msg.append(b' %s\n' % status)
            op.ui.debug(b''.join(msg))

    return handler


def _processpart(op, part):
    """process a single part from a bundle

    The part is guaranteed to have been fully consumed when the function exits
    (even if an exception is raised)."""
    handler = _gethandler(op, part)
    if handler is None:
        return

    # handler is called outside the above try block so that we don't
    # risk catching KeyErrors from anything other than the
    # parthandlermapping lookup (any KeyError raised by handler()
    # itself represents a defect of a different variety).
    output = None
    if op.captureoutput and op.reply is not None:
        op.ui.pushbuffer(error=True, subproc=True)
        output = b''
    try:
        handler(op, part)
    finally:
        if output is not None:
            output = op.ui.popbuffer()
        if output:
            outpart = op.reply.newpart(b'output', data=output, mandatory=False)
            outpart.addparam(
                b'in-reply-to', pycompat.bytestr(part.id), mandatory=False
            )


def decodecaps(blob):
    """decode a bundle2 caps bytes blob into a dictionary

    The blob is a list of capabilities (one per line)
    Capabilities may have values using a line of the form::

        capability=value1,value2,value3

    The values are always a list."""
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if b'=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split(b'=', 1)
            vals = vals.split(b',')
        key = urlreq.unquote(key)
        vals = [urlreq.unquote(v) for v in vals]
        caps[key] = vals
    return caps


def encodecaps(caps):
    """encode a bundle2 caps dictionary into a bytes blob"""
    chunks = []
    for ca in sorted(caps):
        vals = caps[ca]
        ca = urlreq.quote(ca)
        vals = [urlreq.quote(v) for v in vals]
        if vals:
            ca = b"%s=%s" % (ca, b','.join(vals))
        chunks.append(ca)
    return b'\n'.join(chunks)
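

# A small round-trip sketch (hypothetical capability values): encodecaps()
# and decodecaps() are inverses for simple dictionaries.
def _democapsroundtrip():
    caps = {b'HG20': [], b'changegroup': [b'01', b'02']}
    blob = encodecaps(caps)  # b'HG20\nchangegroup=01,02'
    assert decodecaps(blob) == caps
    return blob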


bundletypes = {
    b"": (b"", b'UN'),  # only when using unbundle on ssh and old http servers
    # since the unification ssh accepts a header but there
    # is no capability signaling it.
    b"HG20": (),  # special-cased below
    b"HG10UN": (b"HG10UN", b'UN'),
    b"HG10BZ": (b"HG10", b'BZ'),
    b"HG10GZ": (b"HG10GZ", b'GZ'),
}

# hgweb uses this list to communicate its preferred type
bundlepriority = [b'HG10GZ', b'HG10BZ', b'HG10UN']
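

# Illustrative selection sketch (real negotiation is more involved): pick the
# most preferred legacy bundle type that a peer supports.
def _demopickbundletype(peersupported):
    for bt in bundlepriority:
        if bt in peersupported:
            return bundletypes[bt]  # (header, compression) pair
    return bundletypes[b'HG10UN']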


class bundle20(object):
    """represent an outgoing bundle2 container

    Use the `addparam` method to add stream level parameters and `newpart` to
    populate it. Then call `getchunks` to retrieve all the binary chunks of
    data that compose the bundle2 container."""

    _magicstring = b'HG20'

    def __init__(self, ui, capabilities=()):
        self.ui = ui
        self._params = []
        self._parts = []
        self.capabilities = dict(capabilities)
        self._compengine = util.compengines.forbundletype(b'UN')
        self._compopts = None
        # If compression is being handled by a consumer of the raw
        # data (e.g. the wire protocol), unsetting this flag tells
        # consumers that the bundle is best left uncompressed.
        self.prefercompressed = True

    def setcompression(self, alg, compopts=None):
        """setup core part compression to <alg>"""
        if alg in (None, b'UN'):
            return
        assert not any(n.lower() == b'compression' for n, v in self._params)
        self.addparam(b'Compression', alg)
        self._compengine = util.compengines.forbundletype(alg)
        self._compopts = compopts

    @property
    def nbparts(self):
        """total number of parts added to the bundler"""
        return len(self._parts)

    # methods used to define the bundle2 content
    def addparam(self, name, value=None):
        """add a stream level parameter"""
        if not name:
            raise error.ProgrammingError(b'empty parameter name')
        if name[0:1] not in pycompat.bytestr(
            string.ascii_letters  # pytype: disable=wrong-arg-types
        ):
            raise error.ProgrammingError(
                b'non letter first character: %s' % name
            )
        self._params.append((name, value))

    def addpart(self, part):
        """add a new part to the bundle2 container

        Parts contain the actual application payload."""
        assert part.id is None
        part.id = len(self._parts)  # very cheap counter
        self._parts.append(part)

    def newpart(self, typeid, *args, **kwargs):
        """create a new part and add it to the container

        The part is directly added to the container. For now, this means
        that any failure to properly initialize the part after calling
        ``newpart`` should result in a failure of the whole bundling process.

        You can still fall back to manually creating and adding the part if
        you need better control."""
        part = bundlepart(typeid, *args, **kwargs)
        self.addpart(part)
        return part

    # methods used to generate the bundle2 stream
    def getchunks(self):
        if self.ui.debugflag:
            msg = [b'bundle2-output-bundle: "%s",' % self._magicstring]
            if self._params:
                msg.append(b' (%i params)' % len(self._params))
            msg.append(b' %i parts total\n' % len(self._parts))
            self.ui.debug(b''.join(msg))
        outdebug(self.ui, b'start emission of %s stream' % self._magicstring)
        yield self._magicstring
        param = self._paramchunk()
        outdebug(self.ui, b'bundle parameter: %s' % param)
        yield _pack(_fstreamparamsize, len(param))
        if param:
            yield param
        for chunk in self._compengine.compressstream(
            self._getcorechunk(), self._compopts
        ):
            yield chunk

    def _paramchunk(self):
        """return an encoded version of all stream parameters"""
        blocks = []
        for par, value in self._params:
            par = urlreq.quote(par)
            if value is not None:
                value = urlreq.quote(value)
                par = b'%s=%s' % (par, value)
            blocks.append(par)
        return b' '.join(blocks)

    def _getcorechunk(self):
        """yield chunks for the core part of the bundle

        (all but headers and parameters)"""
        outdebug(self.ui, b'start of parts')
        for part in self._parts:
            outdebug(self.ui, b'bundle part: "%s"' % part.type)
            for chunk in part.getchunks(ui=self.ui):
                yield chunk
        outdebug(self.ui, b'end of bundle')
        yield _pack(_fpartheadersize, 0)

    def salvageoutput(self):
        """return a list with a copy of all output parts in the bundle

        This is meant to be used during error handling to make sure we
        preserve server output"""
        salvaged = []
        for part in self._parts:
            if part.type.startswith(b'output'):
                salvaged.append(part.copy())
        return salvaged
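

# A hedged construction sketch (assuming a `ui` object, e.g. from
# `mercurial.ui.ui()`): build a tiny HG20 bundle and serialize it to bytes.
def _demobundle20(ui):
    bundler = bundle20(ui)
    bundler.addparam(b'demoparam', b'1')  # lower case first letter: advisory
    bundler.newpart(b'output', data=b'hello', mandatory=False)
    return b''.join(bundler.getchunks())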


class unpackermixin(object):
    """A mixin to extract bytes and struct data from a stream"""

    def __init__(self, fp):
        self._fp = fp

    def _unpack(self, format):
        """unpack this struct format from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        data = self._readexact(struct.calcsize(format))
        return _unpack(format, data)

    def _readexact(self, size):
        """read exactly <size> bytes from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        return changegroup.readexactly(self._fp, size)


def getunbundler(ui, fp, magicstring=None):
    """return a valid unbundler object for a given magicstring"""
    if magicstring is None:
        magicstring = changegroup.readexactly(fp, 4)
    magic, version = magicstring[0:2], magicstring[2:4]
    if magic != b'HG':
        ui.debug(
            b"error: invalid magic: %r (version %r), should be 'HG'\n"
            % (magic, version)
        )
        raise error.Abort(_(b'not a Mercurial bundle'))
    unbundlerclass = formatmap.get(version)
    if unbundlerclass is None:
        raise error.Abort(_(b'unknown bundle version %s') % version)
    unbundler = unbundlerclass(ui, fp)
    indebug(ui, b'start processing of %s stream' % magicstring)
    return unbundler
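

# Usage sketch (the file path is hypothetical): open a bundle file, let the
# magic string pick the unbundler class, and walk the parts read-only.
def _demoiterparts(ui, path=b'demo.hg'):
    with open(path, 'rb') as fp:
        unbundler = getunbundler(ui, fp)
        for part in unbundler.iterparts():
            ui.write(b'part: %s\n' % part.type)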
796
796
797
797
798 class unbundle20(unpackermixin):
798 class unbundle20(unpackermixin):
799 """interpret a bundle2 stream
799 """interpret a bundle2 stream
800
800
801 This class is fed with a binary stream and yields parts through its
801 This class is fed with a binary stream and yields parts through its
802 `iterparts` methods."""
802 `iterparts` methods."""
803
803
804 _magicstring = b'HG20'
804 _magicstring = b'HG20'
805
805
806 def __init__(self, ui, fp):
806 def __init__(self, ui, fp):
807 """If header is specified, we do not read it out of the stream."""
807 """If header is specified, we do not read it out of the stream."""
808 self.ui = ui
808 self.ui = ui
809 self._compengine = util.compengines.forbundletype(b'UN')
809 self._compengine = util.compengines.forbundletype(b'UN')
810 self._compressed = None
810 self._compressed = None
811 super(unbundle20, self).__init__(fp)
811 super(unbundle20, self).__init__(fp)
812
812
813 @util.propertycache
813 @util.propertycache
814 def params(self):
814 def params(self):
815 """dictionary of stream level parameters"""
815 """dictionary of stream level parameters"""
816 indebug(self.ui, b'reading bundle2 stream parameters')
816 indebug(self.ui, b'reading bundle2 stream parameters')
817 params = {}
817 params = {}
818 paramssize = self._unpack(_fstreamparamsize)[0]
818 paramssize = self._unpack(_fstreamparamsize)[0]
819 if paramssize < 0:
819 if paramssize < 0:
820 raise error.BundleValueError(
820 raise error.BundleValueError(
821 b'negative bundle param size: %i' % paramssize
821 b'negative bundle param size: %i' % paramssize
822 )
822 )
823 if paramssize:
823 if paramssize:
824 params = self._readexact(paramssize)
824 params = self._readexact(paramssize)
825 params = self._processallparams(params)
825 params = self._processallparams(params)
826 return params
826 return params
827
827
828 def _processallparams(self, paramsblock):
828 def _processallparams(self, paramsblock):
829 """"""
829 """"""
830 params = util.sortdict()
830 params = util.sortdict()
831 for p in paramsblock.split(b' '):
831 for p in paramsblock.split(b' '):
832 p = p.split(b'=', 1)
832 p = p.split(b'=', 1)
833 p = [urlreq.unquote(i) for i in p]
833 p = [urlreq.unquote(i) for i in p]
834 if len(p) < 2:
834 if len(p) < 2:
835 p.append(None)
835 p.append(None)
836 self._processparam(*p)
836 self._processparam(*p)
837 params[p[0]] = p[1]
837 params[p[0]] = p[1]
838 return params
838 return params
839
839
840 def _processparam(self, name, value):
840 def _processparam(self, name, value):
841 """process a parameter, applying its effect if needed
841 """process a parameter, applying its effect if needed
842
842
843 Parameter starting with a lower case letter are advisory and will be
843 Parameter starting with a lower case letter are advisory and will be
844 ignored when unknown. Those starting with an upper case letter are
844 ignored when unknown. Those starting with an upper case letter are
845 mandatory and will this function will raise a KeyError when unknown.
845 mandatory and will this function will raise a KeyError when unknown.
846
846
847 Note: no option are currently supported. Any input will be either
847 Note: no option are currently supported. Any input will be either
848 ignored or failing.
848 ignored or failing.
849 """
849 """
850 if not name:
850 if not name:
851 raise ValueError('empty parameter name')
851 raise ValueError('empty parameter name')
852 if name[0:1] not in pycompat.bytestr(
852 if name[0:1] not in pycompat.bytestr(
853 string.ascii_letters # pytype: disable=wrong-arg-types
853 string.ascii_letters # pytype: disable=wrong-arg-types
854 ):
854 ):
855 raise ValueError('non letter first character: %s' % name)
855 raise ValueError('non letter first character: %s' % name)
856 try:
856 try:
857 handler = b2streamparamsmap[name.lower()]
857 handler = b2streamparamsmap[name.lower()]
858 except KeyError:
858 except KeyError:
859 if name[0:1].islower():
859 if name[0:1].islower():
860 indebug(self.ui, b"ignoring unknown parameter %s" % name)
860 indebug(self.ui, b"ignoring unknown parameter %s" % name)
861 else:
861 else:
862 raise error.BundleUnknownFeatureError(params=(name,))
862 raise error.BundleUnknownFeatureError(params=(name,))
863 else:
863 else:
864 handler(self, name, value)
864 handler(self, name, value)
865
865
866 def _forwardchunks(self):
866 def _forwardchunks(self):
867 """utility to transfer a bundle2 as binary
867 """utility to transfer a bundle2 as binary
868
868
869 This is made necessary by the fact the 'getbundle' command over 'ssh'
869 This is made necessary by the fact the 'getbundle' command over 'ssh'
870 have no way to know then the reply end, relying on the bundle to be
870 have no way to know then the reply end, relying on the bundle to be
871 interpreted to know its end. This is terrible and we are sorry, but we
871 interpreted to know its end. This is terrible and we are sorry, but we
872 needed to move forward to get general delta enabled.
872 needed to move forward to get general delta enabled.
873 """
873 """
874 yield self._magicstring
874 yield self._magicstring
875 assert 'params' not in vars(self)
875 assert 'params' not in vars(self)
876 paramssize = self._unpack(_fstreamparamsize)[0]
876 paramssize = self._unpack(_fstreamparamsize)[0]
877 if paramssize < 0:
877 if paramssize < 0:
878 raise error.BundleValueError(
878 raise error.BundleValueError(
879 b'negative bundle param size: %i' % paramssize
879 b'negative bundle param size: %i' % paramssize
880 )
880 )
881 if paramssize:
881 if paramssize:
882 params = self._readexact(paramssize)
882 params = self._readexact(paramssize)
883 self._processallparams(params)
883 self._processallparams(params)
884 # The payload itself is decompressed below, so drop
884 # The payload itself is decompressed below, so drop
885 # the compression parameter passed down to compensate.
885 # the compression parameter passed down to compensate.
886 outparams = []
886 outparams = []
887 for p in params.split(b' '):
887 for p in params.split(b' '):
888 k, v = p.split(b'=', 1)
888 k, v = p.split(b'=', 1)
889 if k.lower() != b'compression':
889 if k.lower() != b'compression':
890 outparams.append(p)
890 outparams.append(p)
891 outparams = b' '.join(outparams)
891 outparams = b' '.join(outparams)
892 yield _pack(_fstreamparamsize, len(outparams))
892 yield _pack(_fstreamparamsize, len(outparams))
893 yield outparams
893 yield outparams
894 else:
894 else:
895 yield _pack(_fstreamparamsize, paramssize)
895 yield _pack(_fstreamparamsize, paramssize)
896 # From there, payload might need to be decompressed
896 # From there, payload might need to be decompressed
897 self._fp = self._compengine.decompressorreader(self._fp)
897 self._fp = self._compengine.decompressorreader(self._fp)
898 emptycount = 0
898 emptycount = 0
899 while emptycount < 2:
899 while emptycount < 2:
900 # so we can brainlessly loop
900 # so we can brainlessly loop
901 assert _fpartheadersize == _fpayloadsize
901 assert _fpartheadersize == _fpayloadsize
902 size = self._unpack(_fpartheadersize)[0]
902 size = self._unpack(_fpartheadersize)[0]
903 yield _pack(_fpartheadersize, size)
903 yield _pack(_fpartheadersize, size)
904 if size:
904 if size:
905 emptycount = 0
905 emptycount = 0
906 else:
906 else:
907 emptycount += 1
907 emptycount += 1
908 continue
908 continue
909 if size == flaginterrupt:
909 if size == flaginterrupt:
910 continue
910 continue
911 elif size < 0:
911 elif size < 0:
912 raise error.BundleValueError(b'negative chunk size: %i' % size)
912 raise error.BundleValueError(b'negative chunk size: %i' % size)
913 yield self._readexact(size)
913 yield self._readexact(size)
914
914
915 def iterparts(self, seekable=False):
915 def iterparts(self, seekable=False):
916 """yield all parts contained in the stream"""
916 """yield all parts contained in the stream"""
917 cls = seekableunbundlepart if seekable else unbundlepart
917 cls = seekableunbundlepart if seekable else unbundlepart
918 # make sure params have been loaded
918 # make sure params have been loaded
919 self.params
919 self.params
920 # From there, payload needs to be decompressed
920 # From there, payload needs to be decompressed
921 self._fp = self._compengine.decompressorreader(self._fp)
921 self._fp = self._compengine.decompressorreader(self._fp)
922 indebug(self.ui, b'start extraction of bundle2 parts')
922 indebug(self.ui, b'start extraction of bundle2 parts')
923 headerblock = self._readpartheader()
923 headerblock = self._readpartheader()
924 while headerblock is not None:
924 while headerblock is not None:
925 part = cls(self.ui, headerblock, self._fp)
925 part = cls(self.ui, headerblock, self._fp)
926 yield part
926 yield part
927 # Ensure part is fully consumed so we can start reading the next
927 # Ensure part is fully consumed so we can start reading the next
928 # part.
928 # part.
929 part.consume()
929 part.consume()
930
930
931 headerblock = self._readpartheader()
931 headerblock = self._readpartheader()
932 indebug(self.ui, b'end of bundle2 stream')
932 indebug(self.ui, b'end of bundle2 stream')
933
933
934 def _readpartheader(self):
934 def _readpartheader(self):
935 """reads a part header size and return the bytes blob
935 """reads a part header size and return the bytes blob
936
936
937 returns None if empty"""
937 returns None if empty"""
938 headersize = self._unpack(_fpartheadersize)[0]
938 headersize = self._unpack(_fpartheadersize)[0]
939 if headersize < 0:
939 if headersize < 0:
940 raise error.BundleValueError(
940 raise error.BundleValueError(
941 b'negative part header size: %i' % headersize
941 b'negative part header size: %i' % headersize
942 )
942 )
943 indebug(self.ui, b'part header size: %i' % headersize)
943 indebug(self.ui, b'part header size: %i' % headersize)
944 if headersize:
944 if headersize:
945 return self._readexact(headersize)
945 return self._readexact(headersize)
946 return None
946 return None
947
947
948 def compressed(self):
948 def compressed(self):
949 self.params # load params
949 self.params # load params
950 return self._compressed
950 return self._compressed
951
951
952 def close(self):
952 def close(self):
953 """close underlying file"""
953 """close underlying file"""
954 if util.safehasattr(self._fp, 'close'):
954 if util.safehasattr(self._fp, 'close'):
955 return self._fp.close()
955 return self._fp.close()
956
956
957
957
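# --- illustrative sketch (not part of bundle2.py) --------------------------
# A standalone model of the stream-parameter dispatch performed by
# unbundle20._processallparams above: a name must start with a letter, a
# lowercase name is advisory (unknown ones are ignored) and an uppercase
# name is mandatory (unknown ones abort). The handler table and exceptions
# below are stand-ins for b2streamparamsmap and
# error.BundleUnknownFeatureError.
import string

_handlers = {b'compression': lambda value: (b'compression', value)}

def _dispatchparam(name, value):
    if not name:
        raise ValueError('empty parameter name')
    if name[0:1] not in string.ascii_letters.encode('ascii'):
        raise ValueError('non-letter first character: %r' % name)
    handler = _handlers.get(name.lower())
    if handler is None:
        if name[0:1].islower():
            return None  # advisory and unknown: silently ignored
        raise KeyError('unsupported mandatory parameter: %r' % name)
    return handler(value)

assert _dispatchparam(b'zzz-unknown', b'1') is None
assert _dispatchparam(b'Compression', b'BZ') == (b'compression', b'BZ')
# ---------------------------------------------------------------------------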
958 formatmap = {b'20': unbundle20}
958 formatmap = {b'20': unbundle20}
959
959
960 b2streamparamsmap = {}
960 b2streamparamsmap = {}
961
961
962
962
963 def b2streamparamhandler(name):
963 def b2streamparamhandler(name):
964 """register a handler for a stream level parameter"""
964 """register a handler for a stream level parameter"""
965
965
966 def decorator(func):
966 def decorator(func):
967 assert name not in formatmap
967 assert name not in formatmap
968 b2streamparamsmap[name] = func
968 b2streamparamsmap[name] = func
969 return func
969 return func
970
970
971 return decorator
971 return decorator
972
972
973
973
974 @b2streamparamhandler(b'compression')
974 @b2streamparamhandler(b'compression')
975 def processcompression(unbundler, param, value):
975 def processcompression(unbundler, param, value):
976 """read compression parameter and install payload decompression"""
976 """read compression parameter and install payload decompression"""
977 if value not in util.compengines.supportedbundletypes:
977 if value not in util.compengines.supportedbundletypes:
978 raise error.BundleUnknownFeatureError(params=(param,), values=(value,))
978 raise error.BundleUnknownFeatureError(params=(param,), values=(value,))
979 unbundler._compengine = util.compengines.forbundletype(value)
979 unbundler._compengine = util.compengines.forbundletype(value)
980 if value is not None:
980 if value is not None:
981 unbundler._compressed = True
981 unbundler._compressed = True
982
982
983
983
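# Illustrative note (not part of the original source): the parameter value
# names a bundle compression engine. Mercurial's stock engines use the
# bundle type names 'UN' (none), 'GZ' (zlib), 'BZ' (bzip2) and, when zstd
# support is compiled in, 'ZS'; a compressed stream therefore typically
# carries a mandatory 'Compression=BZ'-style stream parameter.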
984 class bundlepart(object):
984 class bundlepart(object):
985 """A bundle2 part contains application level payload
985 """A bundle2 part contains application level payload
986
986
987 The part `type` is used to route the part to the application level
987 The part `type` is used to route the part to the application level
988 handler.
988 handler.
989
989
990 The part payload is contained in ``part.data``. It could be raw bytes or a
990 The part payload is contained in ``part.data``. It could be raw bytes or a
991 generator of byte chunks.
991 generator of byte chunks.
992
992
993 You can add parameters to the part using the ``addparam`` method.
993 You can add parameters to the part using the ``addparam`` method.
994 Parameters can be either mandatory (default) or advisory. Remote side
994 Parameters can be either mandatory (default) or advisory. Remote side
995 should be able to safely ignore the advisory ones.
995 should be able to safely ignore the advisory ones.
996
996
997 Neither data nor parameters can be modified after generation has begun.
997 Neither data nor parameters can be modified after generation has begun.
998 """
998 """
999
999
1000 def __init__(
1000 def __init__(
1001 self,
1001 self,
1002 parttype,
1002 parttype,
1003 mandatoryparams=(),
1003 mandatoryparams=(),
1004 advisoryparams=(),
1004 advisoryparams=(),
1005 data=b'',
1005 data=b'',
1006 mandatory=True,
1006 mandatory=True,
1007 ):
1007 ):
1008 validateparttype(parttype)
1008 validateparttype(parttype)
1009 self.id = None
1009 self.id = None
1010 self.type = parttype
1010 self.type = parttype
1011 self._data = data
1011 self._data = data
1012 self._mandatoryparams = list(mandatoryparams)
1012 self._mandatoryparams = list(mandatoryparams)
1013 self._advisoryparams = list(advisoryparams)
1013 self._advisoryparams = list(advisoryparams)
1014 # checking for duplicated entries
1014 # checking for duplicated entries
1015 self._seenparams = set()
1015 self._seenparams = set()
1016 for pname, __ in self._mandatoryparams + self._advisoryparams:
1016 for pname, __ in self._mandatoryparams + self._advisoryparams:
1017 if pname in self._seenparams:
1017 if pname in self._seenparams:
1018 raise error.ProgrammingError(b'duplicated params: %s' % pname)
1018 raise error.ProgrammingError(b'duplicated params: %s' % pname)
1019 self._seenparams.add(pname)
1019 self._seenparams.add(pname)
1020 # status of the part's generation:
1020 # status of the part's generation:
1021 # - None: not started,
1021 # - None: not started,
1022 # - False: currently generated,
1022 # - False: currently generated,
1023 # - True: generation done.
1023 # - True: generation done.
1024 self._generated = None
1024 self._generated = None
1025 self.mandatory = mandatory
1025 self.mandatory = mandatory
1026
1026
1027 def __repr__(self):
1027 def __repr__(self):
1028 cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
1028 cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
1029 return '<%s object at %x; id: %s; type: %s; mandatory: %s>' % (
1029 return '<%s object at %x; id: %s; type: %s; mandatory: %s>' % (
1030 cls,
1030 cls,
1031 id(self),
1031 id(self),
1032 self.id,
1032 self.id,
1033 self.type,
1033 self.type,
1034 self.mandatory,
1034 self.mandatory,
1035 )
1035 )
1036
1036
1037 def copy(self):
1037 def copy(self):
1038 """return a copy of the part
1038 """return a copy of the part
1039
1039
1040 The new part has the very same content but no partid assigned yet.
1040 The new part has the very same content but no partid assigned yet.
1041 Parts with generated data cannot be copied."""
1041 Parts with generated data cannot be copied."""
1042 assert not util.safehasattr(self.data, 'next')
1042 assert not util.safehasattr(self.data, 'next')
1043 return self.__class__(
1043 return self.__class__(
1044 self.type,
1044 self.type,
1045 self._mandatoryparams,
1045 self._mandatoryparams,
1046 self._advisoryparams,
1046 self._advisoryparams,
1047 self._data,
1047 self._data,
1048 self.mandatory,
1048 self.mandatory,
1049 )
1049 )
1050
1050
1051 # methods used to define the part content
1051 # methods used to define the part content
1052 @property
1052 @property
1053 def data(self):
1053 def data(self):
1054 return self._data
1054 return self._data
1055
1055
1056 @data.setter
1056 @data.setter
1057 def data(self, data):
1057 def data(self, data):
1058 if self._generated is not None:
1058 if self._generated is not None:
1059 raise error.ReadOnlyPartError(b'part is being generated')
1059 raise error.ReadOnlyPartError(b'part is being generated')
1060 self._data = data
1060 self._data = data
1061
1061
1062 @property
1062 @property
1063 def mandatoryparams(self):
1063 def mandatoryparams(self):
1064 # make it an immutable tuple to force people through ``addparam``
1064 # make it an immutable tuple to force people through ``addparam``
1065 return tuple(self._mandatoryparams)
1065 return tuple(self._mandatoryparams)
1066
1066
1067 @property
1067 @property
1068 def advisoryparams(self):
1068 def advisoryparams(self):
1069 # make it an immutable tuple to force people through ``addparam``
1069 # make it an immutable tuple to force people through ``addparam``
1070 return tuple(self._advisoryparams)
1070 return tuple(self._advisoryparams)
1071
1071
1072 def addparam(self, name, value=b'', mandatory=True):
1072 def addparam(self, name, value=b'', mandatory=True):
1073 """add a parameter to the part
1073 """add a parameter to the part
1074
1074
1075 If 'mandatory' is set to True, the remote handler must claim support
1075 If 'mandatory' is set to True, the remote handler must claim support
1076 for this parameter or the unbundling will be aborted.
1076 for this parameter or the unbundling will be aborted.
1077
1077
1078 The 'name' and 'value' cannot exceed 255 bytes each.
1078 The 'name' and 'value' cannot exceed 255 bytes each.
1079 """
1079 """
1080 if self._generated is not None:
1080 if self._generated is not None:
1081 raise error.ReadOnlyPartError(b'part is being generated')
1081 raise error.ReadOnlyPartError(b'part is being generated')
1082 if name in self._seenparams:
1082 if name in self._seenparams:
1083 raise ValueError(b'duplicated params: %s' % name)
1083 raise ValueError(b'duplicated params: %s' % name)
1084 self._seenparams.add(name)
1084 self._seenparams.add(name)
1085 params = self._advisoryparams
1085 params = self._advisoryparams
1086 if mandatory:
1086 if mandatory:
1087 params = self._mandatoryparams
1087 params = self._mandatoryparams
1088 params.append((name, value))
1088 params.append((name, value))
1089
1089
1090 # methods used to generate the bundle2 stream
1090 # methods used to generate the bundle2 stream
1091 def getchunks(self, ui):
1091 def getchunks(self, ui):
1092 if self._generated is not None:
1092 if self._generated is not None:
1093 raise error.ProgrammingError(b'part can only be consumed once')
1093 raise error.ProgrammingError(b'part can only be consumed once')
1094 self._generated = False
1094 self._generated = False
1095
1095
1096 if ui.debugflag:
1096 if ui.debugflag:
1097 msg = [b'bundle2-output-part: "%s"' % self.type]
1097 msg = [b'bundle2-output-part: "%s"' % self.type]
1098 if not self.mandatory:
1098 if not self.mandatory:
1099 msg.append(b' (advisory)')
1099 msg.append(b' (advisory)')
1100 nbmp = len(self.mandatoryparams)
1100 nbmp = len(self.mandatoryparams)
1101 nbap = len(self.advisoryparams)
1101 nbap = len(self.advisoryparams)
1102 if nbmp or nbap:
1102 if nbmp or nbap:
1103 msg.append(b' (params:')
1103 msg.append(b' (params:')
1104 if nbmp:
1104 if nbmp:
1105 msg.append(b' %i mandatory' % nbmp)
1105 msg.append(b' %i mandatory' % nbmp)
1106 if nbap:
1106 if nbap:
1107 msg.append(b' %i advisory' % nbap)
1107 msg.append(b' %i advisory' % nbap)
1108 msg.append(b')')
1108 msg.append(b')')
1109 if not self.data:
1109 if not self.data:
1110 msg.append(b' empty payload')
1110 msg.append(b' empty payload')
1111 elif util.safehasattr(self.data, 'next') or util.safehasattr(
1111 elif util.safehasattr(self.data, 'next') or util.safehasattr(
1112 self.data, b'__next__'
1112 self.data, b'__next__'
1113 ):
1113 ):
1114 msg.append(b' streamed payload')
1114 msg.append(b' streamed payload')
1115 else:
1115 else:
1116 msg.append(b' %i bytes payload' % len(self.data))
1116 msg.append(b' %i bytes payload' % len(self.data))
1117 msg.append(b'\n')
1117 msg.append(b'\n')
1118 ui.debug(b''.join(msg))
1118 ui.debug(b''.join(msg))
1119
1119
1120 #### header
1120 #### header
1121 if self.mandatory:
1121 if self.mandatory:
1122 parttype = self.type.upper()
1122 parttype = self.type.upper()
1123 else:
1123 else:
1124 parttype = self.type.lower()
1124 parttype = self.type.lower()
1125 outdebug(ui, b'part %s: "%s"' % (pycompat.bytestr(self.id), parttype))
1125 outdebug(ui, b'part %s: "%s"' % (pycompat.bytestr(self.id), parttype))
1126 ## parttype
1126 ## parttype
1127 header = [
1127 header = [
1128 _pack(_fparttypesize, len(parttype)),
1128 _pack(_fparttypesize, len(parttype)),
1129 parttype,
1129 parttype,
1130 _pack(_fpartid, self.id),
1130 _pack(_fpartid, self.id),
1131 ]
1131 ]
1132 ## parameters
1132 ## parameters
1133 # count
1133 # count
1134 manpar = self.mandatoryparams
1134 manpar = self.mandatoryparams
1135 advpar = self.advisoryparams
1135 advpar = self.advisoryparams
1136 header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
1136 header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
1137 # size
1137 # size
1138 parsizes = []
1138 parsizes = []
1139 for key, value in manpar:
1139 for key, value in manpar:
1140 parsizes.append(len(key))
1140 parsizes.append(len(key))
1141 parsizes.append(len(value))
1141 parsizes.append(len(value))
1142 for key, value in advpar:
1142 for key, value in advpar:
1143 parsizes.append(len(key))
1143 parsizes.append(len(key))
1144 parsizes.append(len(value))
1144 parsizes.append(len(value))
1145 paramsizes = _pack(_makefpartparamsizes(len(parsizes) // 2), *parsizes)
1145 paramsizes = _pack(_makefpartparamsizes(len(parsizes) // 2), *parsizes)
1146 header.append(paramsizes)
1146 header.append(paramsizes)
1147 # key, value
1147 # key, value
1148 for key, value in manpar:
1148 for key, value in manpar:
1149 header.append(key)
1149 header.append(key)
1150 header.append(value)
1150 header.append(value)
1151 for key, value in advpar:
1151 for key, value in advpar:
1152 header.append(key)
1152 header.append(key)
1153 header.append(value)
1153 header.append(value)
1154 ## finalize header
1154 ## finalize header
1155 try:
1155 try:
1156 headerchunk = b''.join(header)
1156 headerchunk = b''.join(header)
1157 except TypeError:
1157 except TypeError:
1158 raise TypeError(
1158 raise TypeError(
1159 'Found a non-bytes trying to '
1159 'Found a non-bytes trying to '
1160 'build bundle part header: %r' % header
1160 'build bundle part header: %r' % header
1161 )
1161 )
1162 outdebug(ui, b'header chunk size: %i' % len(headerchunk))
1162 outdebug(ui, b'header chunk size: %i' % len(headerchunk))
1163 yield _pack(_fpartheadersize, len(headerchunk))
1163 yield _pack(_fpartheadersize, len(headerchunk))
1164 yield headerchunk
1164 yield headerchunk
1165 ## payload
1165 ## payload
1166 try:
1166 try:
1167 for chunk in self._payloadchunks():
1167 for chunk in self._payloadchunks():
1168 outdebug(ui, b'payload chunk size: %i' % len(chunk))
1168 outdebug(ui, b'payload chunk size: %i' % len(chunk))
1169 yield _pack(_fpayloadsize, len(chunk))
1169 yield _pack(_fpayloadsize, len(chunk))
1170 yield chunk
1170 yield chunk
1171 except GeneratorExit:
1171 except GeneratorExit:
1172 # GeneratorExit means that nobody is listening for our
1172 # GeneratorExit means that nobody is listening for our
1173 # results anyway, so just bail quickly rather than trying
1173 # results anyway, so just bail quickly rather than trying
1174 # to produce an error part.
1174 # to produce an error part.
1175 ui.debug(b'bundle2-generatorexit\n')
1175 ui.debug(b'bundle2-generatorexit\n')
1176 raise
1176 raise
1177 except BaseException as exc:
1177 except BaseException as exc:
1178 bexc = stringutil.forcebytestr(exc)
1178 bexc = stringutil.forcebytestr(exc)
1179 # backup exception data for later
1179 # backup exception data for later
1180 ui.debug(
1180 ui.debug(
1181 b'bundle2-input-stream-interrupt: encoding exception %s' % bexc
1181 b'bundle2-input-stream-interrupt: encoding exception %s' % bexc
1182 )
1182 )
1183 tb = sys.exc_info()[2]
1183 tb = sys.exc_info()[2]
1184 msg = b'unexpected error: %s' % bexc
1184 msg = b'unexpected error: %s' % bexc
1185 interpart = bundlepart(
1185 interpart = bundlepart(
1186 b'error:abort', [(b'message', msg)], mandatory=False
1186 b'error:abort', [(b'message', msg)], mandatory=False
1187 )
1187 )
1188 interpart.id = 0
1188 interpart.id = 0
1189 yield _pack(_fpayloadsize, -1)
1189 yield _pack(_fpayloadsize, -1)
1190 for chunk in interpart.getchunks(ui=ui):
1190 for chunk in interpart.getchunks(ui=ui):
1191 yield chunk
1191 yield chunk
1192 outdebug(ui, b'closing payload chunk')
1192 outdebug(ui, b'closing payload chunk')
1193 # abort current part payload
1193 # abort current part payload
1194 yield _pack(_fpayloadsize, 0)
1194 yield _pack(_fpayloadsize, 0)
1195 pycompat.raisewithtb(exc, tb)
1195 pycompat.raisewithtb(exc, tb)
1196 # end of payload
1196 # end of payload
1197 outdebug(ui, b'closing payload chunk')
1197 outdebug(ui, b'closing payload chunk')
1198 yield _pack(_fpayloadsize, 0)
1198 yield _pack(_fpayloadsize, 0)
1199 self._generated = True
1199 self._generated = True
1200
1200
1201 def _payloadchunks(self):
1201 def _payloadchunks(self):
1202 """yield chunks of a the part payload
1202 """yield chunks of a the part payload
1203
1203
1204 Exists to handle the different methods to provide data to a part."""
1204 Exists to handle the different methods to provide data to a part."""
1205 # we only support fixed size data now.
1205 # we only support fixed size data now.
1206 # This will be improved in the future.
1206 # This will be improved in the future.
1207 if util.safehasattr(self.data, 'next') or util.safehasattr(
1207 if util.safehasattr(self.data, 'next') or util.safehasattr(
1208 self.data, b'__next__'
1208 self.data, b'__next__'
1209 ):
1209 ):
1210 buff = util.chunkbuffer(self.data)
1210 buff = util.chunkbuffer(self.data)
1211 chunk = buff.read(preferedchunksize)
1211 chunk = buff.read(preferedchunksize)
1212 while chunk:
1212 while chunk:
1213 yield chunk
1213 yield chunk
1214 chunk = buff.read(preferedchunksize)
1214 chunk = buff.read(preferedchunksize)
1215 elif len(self.data):
1215 elif len(self.data):
1216 yield self.data
1216 yield self.data
1217
1217
1218
1218
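# --- illustrative sketch (not part of bundle2.py) --------------------------
# Builds one part header by hand, mirroring the layout bundlepart.getchunks()
# emits above: an int32 header size, then a uint8 type length, the type
# (upper-cased when mandatory), a uint32 part id, two uint8 parameter
# counts, one (key size, value size) uint8 pair per parameter, and finally
# the raw keys and values. The struct formats are assumed to match
# _fpartheadersize, _fparttypesize, _fpartid and _fpartparamcount.
import struct

def buildpartheader(parttype, partid, manpar=(), advpar=(), mandatory=True):
    parttype = parttype.upper() if mandatory else parttype.lower()
    params = list(manpar) + list(advpar)
    header = [
        struct.pack(b'>B', len(parttype)),
        parttype,
        struct.pack(b'>I', partid),
        struct.pack(b'>BB', len(manpar), len(advpar)),
    ]
    for key, value in params:
        header.append(struct.pack(b'>BB', len(key), len(value)))
    for key, value in params:
        header.append(key)
        header.append(value)
    blob = b''.join(header)
    return struct.pack(b'>i', len(blob)) + blob

hdr = buildpartheader(b'changegroup', 0, manpar=[(b'version', b'02')])
assert hdr[4] == len(b'CHANGEGROUP')  # first header byte is the type length
# ---------------------------------------------------------------------------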
1219 flaginterrupt = -1
1219 flaginterrupt = -1
1220
1220
1221
1221
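# --- illustrative sketch (not part of bundle2.py) --------------------------
# Standalone model of the frame-copy loop in unbundle20._forwardchunks():
# every frame is a big-endian int32 size followed by that many bytes, a
# size equal to flaginterrupt (-1) is an in-band marker with no body, and
# two consecutive zero sizes terminate the stream. Header and payload
# frames share one size format, which is what lets the loop stay
# "brainless".
import io
import struct

_fsize = struct.Struct(b'>i')  # assumed equivalent of _fpartheadersize

def copyframes(fin, fout):
    empty = 0
    while empty < 2:
        size = _fsize.unpack(fin.read(_fsize.size))[0]
        fout.write(_fsize.pack(size))
        if size == 0:
            empty += 1
            continue
        empty = 0
        if size == flaginterrupt:
            continue  # marker only, no body to copy
        if size < 0:
            raise ValueError('negative chunk size: %i' % size)
        fout.write(fin.read(size))

src = io.BytesIO(_fsize.pack(2) + b'hi' + _fsize.pack(0) + _fsize.pack(0))
dst = io.BytesIO()
copyframes(src, dst)
assert dst.getvalue() == src.getvalue()
# ---------------------------------------------------------------------------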
1222 class interrupthandler(unpackermixin):
1222 class interrupthandler(unpackermixin):
1223 """read one part and process it with restricted capability
1223 """read one part and process it with restricted capability
1224
1224
1225 This allows transmitting exceptions raised on the producer side during part
1225 This allows transmitting exceptions raised on the producer side during part
1226 iteration while the consumer is reading a part.
1226 iteration while the consumer is reading a part.
1227
1227
1228 Parts processed in this manner only have access to a ui object."""
1228 Parts processed in this manner only have access to a ui object."""
1229
1229
1230 def __init__(self, ui, fp):
1230 def __init__(self, ui, fp):
1231 super(interrupthandler, self).__init__(fp)
1231 super(interrupthandler, self).__init__(fp)
1232 self.ui = ui
1232 self.ui = ui
1233
1233
1234 def _readpartheader(self):
1234 def _readpartheader(self):
1235 """reads a part header size and return the bytes blob
1235 """reads a part header size and return the bytes blob
1236
1236
1237 returns None if empty"""
1237 returns None if empty"""
1238 headersize = self._unpack(_fpartheadersize)[0]
1238 headersize = self._unpack(_fpartheadersize)[0]
1239 if headersize < 0:
1239 if headersize < 0:
1240 raise error.BundleValueError(
1240 raise error.BundleValueError(
1241 b'negative part header size: %i' % headersize
1241 b'negative part header size: %i' % headersize
1242 )
1242 )
1243 indebug(self.ui, b'part header size: %i\n' % headersize)
1243 indebug(self.ui, b'part header size: %i\n' % headersize)
1244 if headersize:
1244 if headersize:
1245 return self._readexact(headersize)
1245 return self._readexact(headersize)
1246 return None
1246 return None
1247
1247
1248 def __call__(self):
1248 def __call__(self):
1249
1249
1250 self.ui.debug(
1250 self.ui.debug(
1251 b'bundle2-input-stream-interrupt: opening out of band context\n'
1251 b'bundle2-input-stream-interrupt: opening out of band context\n'
1252 )
1252 )
1253 indebug(self.ui, b'bundle2 stream interruption, looking for a part.')
1253 indebug(self.ui, b'bundle2 stream interruption, looking for a part.')
1254 headerblock = self._readpartheader()
1254 headerblock = self._readpartheader()
1255 if headerblock is None:
1255 if headerblock is None:
1256 indebug(self.ui, b'no part found during interruption.')
1256 indebug(self.ui, b'no part found during interruption.')
1257 return
1257 return
1258 part = unbundlepart(self.ui, headerblock, self._fp)
1258 part = unbundlepart(self.ui, headerblock, self._fp)
1259 op = interruptoperation(self.ui)
1259 op = interruptoperation(self.ui)
1260 hardabort = False
1260 hardabort = False
1261 try:
1261 try:
1262 _processpart(op, part)
1262 _processpart(op, part)
1263 except (SystemExit, KeyboardInterrupt):
1263 except (SystemExit, KeyboardInterrupt):
1264 hardabort = True
1264 hardabort = True
1265 raise
1265 raise
1266 finally:
1266 finally:
1267 if not hardabort:
1267 if not hardabort:
1268 part.consume()
1268 part.consume()
1269 self.ui.debug(
1269 self.ui.debug(
1270 b'bundle2-input-stream-interrupt: closing out of band context\n'
1270 b'bundle2-input-stream-interrupt: closing out of band context\n'
1271 )
1271 )
1272
1272
1273
1273
1274 class interruptoperation(object):
1274 class interruptoperation(object):
1275 """A limited operation to be use by part handler during interruption
1275 """A limited operation to be use by part handler during interruption
1276
1276
1277 It only has access to a ui object.
1277 It only has access to a ui object.
1278 """
1278 """
1279
1279
1280 def __init__(self, ui):
1280 def __init__(self, ui):
1281 self.ui = ui
1281 self.ui = ui
1282 self.reply = None
1282 self.reply = None
1283 self.captureoutput = False
1283 self.captureoutput = False
1284
1284
1285 @property
1285 @property
1286 def repo(self):
1286 def repo(self):
1287 raise error.ProgrammingError(b'no repo access from stream interruption')
1287 raise error.ProgrammingError(b'no repo access from stream interruption')
1288
1288
1289 def gettransaction(self):
1289 def gettransaction(self):
1290 raise TransactionUnavailable(b'no repo access from stream interruption')
1290 raise TransactionUnavailable(b'no repo access from stream interruption')
1291
1291
1292
1292
1293 def decodepayloadchunks(ui, fh):
1293 def decodepayloadchunks(ui, fh):
1294 """Reads bundle2 part payload data into chunks.
1294 """Reads bundle2 part payload data into chunks.
1295
1295
1296 Part payload data consists of framed chunks. This function takes
1296 Part payload data consists of framed chunks. This function takes
1297 a file handle and emits those chunks.
1297 a file handle and emits those chunks.
1298 """
1298 """
1299 dolog = ui.configbool(b'devel', b'bundle2.debug')
1299 dolog = ui.configbool(b'devel', b'bundle2.debug')
1300 debug = ui.debug
1300 debug = ui.debug
1301
1301
1302 headerstruct = struct.Struct(_fpayloadsize)
1302 headerstruct = struct.Struct(_fpayloadsize)
1303 headersize = headerstruct.size
1303 headersize = headerstruct.size
1304 unpack = headerstruct.unpack
1304 unpack = headerstruct.unpack
1305
1305
1306 readexactly = changegroup.readexactly
1306 readexactly = changegroup.readexactly
1307 read = fh.read
1307 read = fh.read
1308
1308
1309 chunksize = unpack(readexactly(fh, headersize))[0]
1309 chunksize = unpack(readexactly(fh, headersize))[0]
1310 indebug(ui, b'payload chunk size: %i' % chunksize)
1310 indebug(ui, b'payload chunk size: %i' % chunksize)
1311
1311
1312 # changegroup.readexactly() is inlined below for performance.
1312 # changegroup.readexactly() is inlined below for performance.
1313 while chunksize:
1313 while chunksize:
1314 if chunksize >= 0:
1314 if chunksize >= 0:
1315 s = read(chunksize)
1315 s = read(chunksize)
1316 if len(s) < chunksize:
1316 if len(s) < chunksize:
1317 raise error.Abort(
1317 raise error.Abort(
1318 _(
1318 _(
1319 b'stream ended unexpectedly '
1319 b'stream ended unexpectedly '
1320 b'(got %d bytes, expected %d)'
1320 b'(got %d bytes, expected %d)'
1321 )
1321 )
1322 % (len(s), chunksize)
1322 % (len(s), chunksize)
1323 )
1323 )
1324
1324
1325 yield s
1325 yield s
1326 elif chunksize == flaginterrupt:
1326 elif chunksize == flaginterrupt:
1327 # Interrupt "signal" detected. The regular stream is interrupted
1327 # Interrupt "signal" detected. The regular stream is interrupted
1328 # and a bundle2 part follows. Consume it.
1328 # and a bundle2 part follows. Consume it.
1329 interrupthandler(ui, fh)()
1329 interrupthandler(ui, fh)()
1330 else:
1330 else:
1331 raise error.BundleValueError(
1331 raise error.BundleValueError(
1332 b'negative payload chunk size: %s' % chunksize
1332 b'negative payload chunk size: %s' % chunksize
1333 )
1333 )
1334
1334
1335 s = read(headersize)
1335 s = read(headersize)
1336 if len(s) < headersize:
1336 if len(s) < headersize:
1337 raise error.Abort(
1337 raise error.Abort(
1338 _(b'stream ended unexpectedly (got %d bytes, expected %d)')
1338 _(b'stream ended unexpectedly (got %d bytes, expected %d)')
1339 % (len(s), headersize)
1339 % (len(s), headersize)
1340 )
1340 )
1341
1341
1342 chunksize = unpack(s)[0]
1342 chunksize = unpack(s)[0]
1343
1343
1344 # indebug() inlined for performance.
1344 # indebug() inlined for performance.
1345 if dolog:
1345 if dolog:
1346 debug(b'bundle2-input: payload chunk size: %i\n' % chunksize)
1346 debug(b'bundle2-input: payload chunk size: %i\n' % chunksize)
1347
1347
1348
1348
1349 class unbundlepart(unpackermixin):
1349 class unbundlepart(unpackermixin):
1350 """a bundle part read from a bundle"""
1350 """a bundle part read from a bundle"""
1351
1351
1352 def __init__(self, ui, header, fp):
1352 def __init__(self, ui, header, fp):
1353 super(unbundlepart, self).__init__(fp)
1353 super(unbundlepart, self).__init__(fp)
1354 self._seekable = util.safehasattr(fp, 'seek') and util.safehasattr(
1354 self._seekable = util.safehasattr(fp, 'seek') and util.safehasattr(
1355 fp, b'tell'
1355 fp, b'tell'
1356 )
1356 )
1357 self.ui = ui
1357 self.ui = ui
1358 # unbundle state attr
1358 # unbundle state attr
1359 self._headerdata = header
1359 self._headerdata = header
1360 self._headeroffset = 0
1360 self._headeroffset = 0
1361 self._initialized = False
1361 self._initialized = False
1362 self.consumed = False
1362 self.consumed = False
1363 # part data
1363 # part data
1364 self.id = None
1364 self.id = None
1365 self.type = None
1365 self.type = None
1366 self.mandatoryparams = None
1366 self.mandatoryparams = None
1367 self.advisoryparams = None
1367 self.advisoryparams = None
1368 self.params = None
1368 self.params = None
1369 self.mandatorykeys = ()
1369 self.mandatorykeys = ()
1370 self._readheader()
1370 self._readheader()
1371 self._mandatory = None
1371 self._mandatory = None
1372 self._pos = 0
1372 self._pos = 0
1373
1373
1374 def _fromheader(self, size):
1374 def _fromheader(self, size):
1375 """return the next <size> byte from the header"""
1375 """return the next <size> byte from the header"""
1376 offset = self._headeroffset
1376 offset = self._headeroffset
1377 data = self._headerdata[offset : (offset + size)]
1377 data = self._headerdata[offset : (offset + size)]
1378 self._headeroffset = offset + size
1378 self._headeroffset = offset + size
1379 return data
1379 return data
1380
1380
1381 def _unpackheader(self, format):
1381 def _unpackheader(self, format):
1382 """read given format from header
1382 """read given format from header
1383
1383
1384 This automatically computes the size of the format to read.
1384 This automatically computes the size of the format to read.
1385 data = self._fromheader(struct.calcsize(format))
1385 data = self._fromheader(struct.calcsize(format))
1386 return _unpack(format, data)
1386 return _unpack(format, data)
1387
1387
1388 def _initparams(self, mandatoryparams, advisoryparams):
1388 def _initparams(self, mandatoryparams, advisoryparams):
1389 """internal function to setup all logic related parameters"""
1389 """internal function to setup all logic related parameters"""
1390 # make it read only to prevent people touching it by mistake.
1390 # make it read only to prevent people touching it by mistake.
1391 self.mandatoryparams = tuple(mandatoryparams)
1391 self.mandatoryparams = tuple(mandatoryparams)
1392 self.advisoryparams = tuple(advisoryparams)
1392 self.advisoryparams = tuple(advisoryparams)
1393 # user friendly UI
1393 # user friendly UI
1394 self.params = util.sortdict(self.mandatoryparams)
1394 self.params = util.sortdict(self.mandatoryparams)
1395 self.params.update(self.advisoryparams)
1395 self.params.update(self.advisoryparams)
1396 self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)
1396 self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)
1397
1397
1398 def _readheader(self):
1398 def _readheader(self):
1399 """read the header and setup the object"""
1399 """read the header and setup the object"""
1400 typesize = self._unpackheader(_fparttypesize)[0]
1400 typesize = self._unpackheader(_fparttypesize)[0]
1401 self.type = self._fromheader(typesize)
1401 self.type = self._fromheader(typesize)
1402 indebug(self.ui, b'part type: "%s"' % self.type)
1402 indebug(self.ui, b'part type: "%s"' % self.type)
1403 self.id = self._unpackheader(_fpartid)[0]
1403 self.id = self._unpackheader(_fpartid)[0]
1404 indebug(self.ui, b'part id: "%s"' % pycompat.bytestr(self.id))
1404 indebug(self.ui, b'part id: "%s"' % pycompat.bytestr(self.id))
1405 # extract mandatory bit from type
1405 # extract mandatory bit from type
1406 self.mandatory = self.type != self.type.lower()
1406 self.mandatory = self.type != self.type.lower()
1407 self.type = self.type.lower()
1407 self.type = self.type.lower()
1408 ## reading parameters
1408 ## reading parameters
1409 # param count
1409 # param count
1410 mancount, advcount = self._unpackheader(_fpartparamcount)
1410 mancount, advcount = self._unpackheader(_fpartparamcount)
1411 indebug(self.ui, b'part parameters: %i' % (mancount + advcount))
1411 indebug(self.ui, b'part parameters: %i' % (mancount + advcount))
1412 # param size
1412 # param size
1413 fparamsizes = _makefpartparamsizes(mancount + advcount)
1413 fparamsizes = _makefpartparamsizes(mancount + advcount)
1414 paramsizes = self._unpackheader(fparamsizes)
1414 paramsizes = self._unpackheader(fparamsizes)
1415 # make it a list of pairs again
1415 # make it a list of pairs again
1416 paramsizes = list(zip(paramsizes[::2], paramsizes[1::2]))
1416 paramsizes = list(zip(paramsizes[::2], paramsizes[1::2]))
1417 # split mandatory from advisory
1417 # split mandatory from advisory
1418 mansizes = paramsizes[:mancount]
1418 mansizes = paramsizes[:mancount]
1419 advsizes = paramsizes[mancount:]
1419 advsizes = paramsizes[mancount:]
1420 # retrieve param value
1420 # retrieve param value
1421 manparams = []
1421 manparams = []
1422 for key, value in mansizes:
1422 for key, value in mansizes:
1423 manparams.append((self._fromheader(key), self._fromheader(value)))
1423 manparams.append((self._fromheader(key), self._fromheader(value)))
1424 advparams = []
1424 advparams = []
1425 for key, value in advsizes:
1425 for key, value in advsizes:
1426 advparams.append((self._fromheader(key), self._fromheader(value)))
1426 advparams.append((self._fromheader(key), self._fromheader(value)))
1427 self._initparams(manparams, advparams)
1427 self._initparams(manparams, advparams)
1428 ## part payload
1428 ## part payload
1429 self._payloadstream = util.chunkbuffer(self._payloadchunks())
1429 self._payloadstream = util.chunkbuffer(self._payloadchunks())
1430 # the header is fully read; record that fact
1430 # the header is fully read; record that fact
1431 self._initialized = True
1431 self._initialized = True
1432
1432
1433 def _payloadchunks(self):
1433 def _payloadchunks(self):
1434 """Generator of decoded chunks in the payload."""
1434 """Generator of decoded chunks in the payload."""
1435 return decodepayloadchunks(self.ui, self._fp)
1435 return decodepayloadchunks(self.ui, self._fp)
1436
1436
1437 def consume(self):
1437 def consume(self):
1438 """Read the part payload until completion.
1438 """Read the part payload until completion.
1439
1439
1440 By consuming the part data, the underlying stream read offset will
1440 By consuming the part data, the underlying stream read offset will
1441 be advanced to the next part (or end of stream).
1441 be advanced to the next part (or end of stream).
1442 """
1442 """
1443 if self.consumed:
1443 if self.consumed:
1444 return
1444 return
1445
1445
1446 chunk = self.read(32768)
1446 chunk = self.read(32768)
1447 while chunk:
1447 while chunk:
1448 self._pos += len(chunk)
1448 self._pos += len(chunk)
1449 chunk = self.read(32768)
1449 chunk = self.read(32768)
1450
1450
1451 def read(self, size=None):
1451 def read(self, size=None):
1452 """read payload data"""
1452 """read payload data"""
1453 if not self._initialized:
1453 if not self._initialized:
1454 self._readheader()
1454 self._readheader()
1455 if size is None:
1455 if size is None:
1456 data = self._payloadstream.read()
1456 data = self._payloadstream.read()
1457 else:
1457 else:
1458 data = self._payloadstream.read(size)
1458 data = self._payloadstream.read(size)
1459 self._pos += len(data)
1459 self._pos += len(data)
1460 if size is None or len(data) < size:
1460 if size is None or len(data) < size:
1461 if not self.consumed and self._pos:
1461 if not self.consumed and self._pos:
1462 self.ui.debug(
1462 self.ui.debug(
1463 b'bundle2-input-part: total payload size %i\n' % self._pos
1463 b'bundle2-input-part: total payload size %i\n' % self._pos
1464 )
1464 )
1465 self.consumed = True
1465 self.consumed = True
1466 return data
1466 return data
1467
1467
1468
1468
1469 class seekableunbundlepart(unbundlepart):
1469 class seekableunbundlepart(unbundlepart):
1470 """A bundle2 part in a bundle that is seekable.
1470 """A bundle2 part in a bundle that is seekable.
1471
1471
1472 Regular ``unbundlepart`` instances can only be read once. This class
1472 Regular ``unbundlepart`` instances can only be read once. This class
1473 extends ``unbundlepart`` to enable bi-directional seeking within the
1473 extends ``unbundlepart`` to enable bi-directional seeking within the
1474 part.
1474 part.
1475
1475
1476 Bundle2 part data consists of framed chunks. Offsets when seeking
1476 Bundle2 part data consists of framed chunks. Offsets when seeking
1477 refer to the decoded data, not the offsets in the underlying bundle2
1477 refer to the decoded data, not the offsets in the underlying bundle2
1478 stream.
1478 stream.
1479
1479
1480 To facilitate quickly seeking within the decoded data, instances of this
1480 To facilitate quickly seeking within the decoded data, instances of this
1481 class maintain a mapping between offsets in the underlying stream and
1481 class maintain a mapping between offsets in the underlying stream and
1482 the decoded payload. This mapping will consume memory in proportion
1482 the decoded payload. This mapping will consume memory in proportion
1483 to the number of chunks within the payload (which almost certainly
1483 to the number of chunks within the payload (which almost certainly
1484 increases in proportion with the size of the part).
1484 increases in proportion with the size of the part).
1485 """
1485 """
1486
1486
1487 def __init__(self, ui, header, fp):
1487 def __init__(self, ui, header, fp):
1488 # (payload, file) offsets for chunk starts.
1488 # (payload, file) offsets for chunk starts.
1489 self._chunkindex = []
1489 self._chunkindex = []
1490
1490
1491 super(seekableunbundlepart, self).__init__(ui, header, fp)
1491 super(seekableunbundlepart, self).__init__(ui, header, fp)
1492
1492
1493 def _payloadchunks(self, chunknum=0):
1493 def _payloadchunks(self, chunknum=0):
1494 '''seek to specified chunk and start yielding data'''
1494 '''seek to specified chunk and start yielding data'''
1495 if len(self._chunkindex) == 0:
1495 if len(self._chunkindex) == 0:
1496 assert chunknum == 0, b'Must start with chunk 0'
1496 assert chunknum == 0, b'Must start with chunk 0'
1497 self._chunkindex.append((0, self._tellfp()))
1497 self._chunkindex.append((0, self._tellfp()))
1498 else:
1498 else:
1499 assert chunknum < len(self._chunkindex), (
1499 assert chunknum < len(self._chunkindex), (
1500 b'Unknown chunk %d' % chunknum
1500 b'Unknown chunk %d' % chunknum
1501 )
1501 )
1502 self._seekfp(self._chunkindex[chunknum][1])
1502 self._seekfp(self._chunkindex[chunknum][1])
1503
1503
1504 pos = self._chunkindex[chunknum][0]
1504 pos = self._chunkindex[chunknum][0]
1505
1505
1506 for chunk in decodepayloadchunks(self.ui, self._fp):
1506 for chunk in decodepayloadchunks(self.ui, self._fp):
1507 chunknum += 1
1507 chunknum += 1
1508 pos += len(chunk)
1508 pos += len(chunk)
1509 if chunknum == len(self._chunkindex):
1509 if chunknum == len(self._chunkindex):
1510 self._chunkindex.append((pos, self._tellfp()))
1510 self._chunkindex.append((pos, self._tellfp()))
1511
1511
1512 yield chunk
1512 yield chunk
1513
1513
1514 def _findchunk(self, pos):
1514 def _findchunk(self, pos):
1515 '''for a given payload position, return a chunk number and offset'''
1515 '''for a given payload position, return a chunk number and offset'''
1516 for chunk, (ppos, fpos) in enumerate(self._chunkindex):
1516 for chunk, (ppos, fpos) in enumerate(self._chunkindex):
1517 if ppos == pos:
1517 if ppos == pos:
1518 return chunk, 0
1518 return chunk, 0
1519 elif ppos > pos:
1519 elif ppos > pos:
1520 return chunk - 1, pos - self._chunkindex[chunk - 1][0]
1520 return chunk - 1, pos - self._chunkindex[chunk - 1][0]
1521 raise ValueError(b'Unknown chunk')
1521 raise ValueError(b'Unknown chunk')
1522
1522
1523 def tell(self):
1523 def tell(self):
1524 return self._pos
1524 return self._pos
1525
1525
1526 def seek(self, offset, whence=os.SEEK_SET):
1526 def seek(self, offset, whence=os.SEEK_SET):
1527 if whence == os.SEEK_SET:
1527 if whence == os.SEEK_SET:
1528 newpos = offset
1528 newpos = offset
1529 elif whence == os.SEEK_CUR:
1529 elif whence == os.SEEK_CUR:
1530 newpos = self._pos + offset
1530 newpos = self._pos + offset
1531 elif whence == os.SEEK_END:
1531 elif whence == os.SEEK_END:
1532 if not self.consumed:
1532 if not self.consumed:
1533 # Can't use self.consume() here because it advances self._pos.
1533 # Can't use self.consume() here because it advances self._pos.
1534 chunk = self.read(32768)
1534 chunk = self.read(32768)
1535 while chunk:
1535 while chunk:
1536 chunk = self.read(32768)
1536 chunk = self.read(32768)
1537 newpos = self._chunkindex[-1][0] - offset
1537 newpos = self._chunkindex[-1][0] - offset
1538 else:
1538 else:
1539 raise ValueError(b'Unknown whence value: %r' % (whence,))
1539 raise ValueError(b'Unknown whence value: %r' % (whence,))
1540
1540
1541 if newpos > self._chunkindex[-1][0] and not self.consumed:
1541 if newpos > self._chunkindex[-1][0] and not self.consumed:
1542 # Can't use self.consume() here because it advances self._pos.
1542 # Can't use self.consume() here because it advances self._pos.
1543 chunk = self.read(32768)
1543 chunk = self.read(32768)
1544 while chunk:
1544 while chunk:
1545 chunk = self.read(32768)
1545 chunk = self.read(32768)
1546
1546
1547 if not 0 <= newpos <= self._chunkindex[-1][0]:
1547 if not 0 <= newpos <= self._chunkindex[-1][0]:
1548 raise ValueError(b'Offset out of range')
1548 raise ValueError(b'Offset out of range')
1549
1549
1550 if self._pos != newpos:
1550 if self._pos != newpos:
1551 chunk, internaloffset = self._findchunk(newpos)
1551 chunk, internaloffset = self._findchunk(newpos)
1552 self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
1552 self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
1553 adjust = self.read(internaloffset)
1553 adjust = self.read(internaloffset)
1554 if len(adjust) != internaloffset:
1554 if len(adjust) != internaloffset:
1555 raise error.Abort(_(b'Seek failed\n'))
1555 raise error.Abort(_(b'Seek failed\n'))
1556 self._pos = newpos
1556 self._pos = newpos
1557
1557
1558 def _seekfp(self, offset, whence=0):
1558 def _seekfp(self, offset, whence=0):
1559 """move the underlying file pointer
1559 """move the underlying file pointer
1560
1560
1561 This method is meant for internal usage by the bundle2 protocol only.
1561 This method is meant for internal usage by the bundle2 protocol only.
1562 It directly manipulates the low-level stream, including bundle2-level
1562 It directly manipulates the low-level stream, including bundle2-level
1563 instructions.
1563 instructions.
1564
1564
1565 Do not use it to implement higher-level logic or methods."""
1565 Do not use it to implement higher-level logic or methods."""
1566 if self._seekable:
1566 if self._seekable:
1567 return self._fp.seek(offset, whence)
1567 return self._fp.seek(offset, whence)
1568 else:
1568 else:
1569 raise NotImplementedError(_(b'File pointer is not seekable'))
1569 raise NotImplementedError(_(b'File pointer is not seekable'))
1570
1570
1571 def _tellfp(self):
1571 def _tellfp(self):
1572 """return the file offset, or None if file is not seekable
1572 """return the file offset, or None if file is not seekable
1573
1573
1574 This method is meant for internal usage by the bundle2 protocol only.
1574 This method is meant for internal usage by the bundle2 protocol only.
1575 It directly manipulates the low-level stream, including bundle2-level
1575 It directly manipulates the low-level stream, including bundle2-level
1576 instructions.
1576 instructions.
1577
1577
1578 Do not use it to implement higher-level logic or methods."""
1578 Do not use it to implement higher-level logic or methods."""
1579 if self._seekable:
1579 if self._seekable:
1580 try:
1580 try:
1581 return self._fp.tell()
1581 return self._fp.tell()
1582 except IOError as e:
1582 except IOError as e:
1583 if e.errno == errno.ESPIPE:
1583 if e.errno == errno.ESPIPE:
1584 self._seekable = False
1584 self._seekable = False
1585 else:
1585 else:
1586 raise
1586 raise
1587 return None
1587 return None
1588
1588
1589
1589
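# --- illustrative sketch (not part of bundle2.py) --------------------------
# The _chunkindex built by seekableunbundlepart above is a sorted list of
# (payload offset, file offset) pairs, one entry per decoded chunk start;
# _findchunk() walks it to translate a payload position into (chunk
# number, offset inside that chunk):
def _findchunkindex(chunkindex, pos):
    for chunk, (ppos, fpos) in enumerate(chunkindex):
        if ppos == pos:
            return chunk, 0
        elif ppos > pos:
            return chunk - 1, pos - chunkindex[chunk - 1][0]
    raise ValueError('Unknown chunk')

# chunks decoded so far start at payload offsets 0, 10 and 25:
_index = [(0, 100), (10, 117), (25, 140)]
assert _findchunkindex(_index, 10) == (1, 0)  # exactly on a chunk boundary
assert _findchunkindex(_index, 13) == (1, 3)  # 3 bytes into the second chunk
# ---------------------------------------------------------------------------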
1590 # These are only the static capabilities.
1590 # These are only the static capabilities.
1591 # Check the 'getrepocaps' function for the rest.
1591 # Check the 'getrepocaps' function for the rest.
1592 capabilities = {
1592 capabilities = {
1593 b'HG20': (),
1593 b'HG20': (),
1594 b'bookmarks': (),
1594 b'bookmarks': (),
1595 b'error': (b'abort', b'unsupportedcontent', b'pushraced', b'pushkey'),
1595 b'error': (b'abort', b'unsupportedcontent', b'pushraced', b'pushkey'),
1596 b'listkeys': (),
1596 b'listkeys': (),
1597 b'pushkey': (),
1597 b'pushkey': (),
1598 b'digests': tuple(sorted(util.DIGESTS.keys())),
1598 b'digests': tuple(sorted(util.DIGESTS.keys())),
1599 b'remote-changegroup': (b'http', b'https'),
1599 b'remote-changegroup': (b'http', b'https'),
1600 b'hgtagsfnodes': (),
1600 b'hgtagsfnodes': (),
1601 b'rev-branch-cache': (),
1601 b'rev-branch-cache': (),
1602 b'phases': (b'heads',),
1602 b'phases': (b'heads',),
1603 b'stream': (b'v2',),
1603 b'stream': (b'v2',),
1604 }
1604 }
1605
1605
1606
1606
1607 def getrepocaps(repo, allowpushback=False, role=None):
1607 def getrepocaps(repo, allowpushback=False, role=None):
1608 """return the bundle2 capabilities for a given repo
1608 """return the bundle2 capabilities for a given repo
1609
1609
1610 Exists to allow extensions (like evolution) to mutate the capabilities.
1610 Exists to allow extensions (like evolution) to mutate the capabilities.
1611
1611
1612 The returned value is used for servers advertising their capabilities as
1612 The returned value is used for servers advertising their capabilities as
1613 well as clients advertising their capabilities to servers as part of
1613 well as clients advertising their capabilities to servers as part of
1614 bundle2 requests. The ``role`` argument specifies which is which.
1614 bundle2 requests. The ``role`` argument specifies which is which.
1615 """
1615 """
1616 if role not in (b'client', b'server'):
1616 if role not in (b'client', b'server'):
1617 raise error.ProgrammingError(b'role argument must be client or server')
1617 raise error.ProgrammingError(b'role argument must be client or server')
1618
1618
1619 caps = capabilities.copy()
1619 caps = capabilities.copy()
1620 caps[b'changegroup'] = tuple(
1620 caps[b'changegroup'] = tuple(
1621 sorted(changegroup.supportedincomingversions(repo))
1621 sorted(changegroup.supportedincomingversions(repo))
1622 )
1622 )
1623 if obsolete.isenabled(repo, obsolete.exchangeopt):
1623 if obsolete.isenabled(repo, obsolete.exchangeopt):
1624 supportedformat = tuple(b'V%i' % v for v in obsolete.formats)
1624 supportedformat = tuple(b'V%i' % v for v in obsolete.formats)
1625 caps[b'obsmarkers'] = supportedformat
1625 caps[b'obsmarkers'] = supportedformat
1626 if allowpushback:
1626 if allowpushback:
1627 caps[b'pushback'] = ()
1627 caps[b'pushback'] = ()
1628 cpmode = repo.ui.config(b'server', b'concurrent-push-mode')
1628 cpmode = repo.ui.config(b'server', b'concurrent-push-mode')
1629 if cpmode == b'check-related':
1629 if cpmode == b'check-related':
1630 caps[b'checkheads'] = (b'related',)
1630 caps[b'checkheads'] = (b'related',)
1631 if b'phases' in repo.ui.configlist(b'devel', b'legacy.exchange'):
1631 if b'phases' in repo.ui.configlist(b'devel', b'legacy.exchange'):
1632 caps.pop(b'phases')
1632 caps.pop(b'phases')
1633
1633
1634 # Don't advertise stream clone support in server mode if not configured.
1634 # Don't advertise stream clone support in server mode if not configured.
1635 if role == b'server':
1635 if role == b'server':
1636 streamsupported = repo.ui.configbool(
1636 streamsupported = repo.ui.configbool(
1637 b'server', b'uncompressed', untrusted=True
1637 b'server', b'uncompressed', untrusted=True
1638 )
1638 )
1639 featuresupported = repo.ui.configbool(b'server', b'bundle2.stream')
1639 featuresupported = repo.ui.configbool(b'server', b'bundle2.stream')
1640
1640
1641 if not streamsupported or not featuresupported:
1641 if not streamsupported or not featuresupported:
1642 caps.pop(b'stream')
1642 caps.pop(b'stream')
1643 # Else always advertise support on client, because payload support
1643 # Else always advertise support on client, because payload support
1644 # should always be advertised.
1644 # should always be advertised.
1645
1645
1646 return caps
1646 return caps
1647
1647
1648
1648
1649 def bundle2caps(remote):
1649 def bundle2caps(remote):
1650 """return the bundle capabilities of a peer as dict"""
1650 """return the bundle capabilities of a peer as dict"""
1651 raw = remote.capable(b'bundle2')
1651 raw = remote.capable(b'bundle2')
1652 if not raw and raw != b'':
1652 if not raw and raw != b'':
1653 return {}
1653 return {}
1654 capsblob = urlreq.unquote(remote.capable(b'bundle2'))
1654 capsblob = urlreq.unquote(remote.capable(b'bundle2'))
1655 return decodecaps(capsblob)
1655 return decodecaps(capsblob)
1656
1656
1657
1657
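# --- illustrative sketch (not part of bundle2.py) --------------------------
# The capability blob unquoted by bundle2caps() above is newline-separated
# text with one capability per line and comma-joined values after '=',
# every token percent-encoded. The helpers below are standalone stand-ins
# for bundle2's encodecaps/decodecaps pair, written over str for brevity.
from urllib.parse import quote, unquote

def encodecapsblob(caps):
    lines = []
    for name in sorted(caps):
        line = quote(name)
        if caps[name]:
            line += '=' + ','.join(quote(v) for v in caps[name])
        lines.append(line)
    return '\n'.join(lines)

def decodecapsblob(blob):
    caps = {}
    for line in blob.splitlines():
        name, sep, vals = line.partition('=')
        caps[unquote(name)] = (
            tuple(unquote(v) for v in vals.split(',')) if sep else ()
        )
    return caps

_caps = {'phases': ('heads',), 'listkeys': ()}
assert decodecapsblob(encodecapsblob(_caps)) == _caps
# ---------------------------------------------------------------------------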
1658 def obsmarkersversion(caps):
1658 def obsmarkersversion(caps):
1659 """extract the list of supported obsmarkers versions from a bundle2caps dict"""
1659 """extract the list of supported obsmarkers versions from a bundle2caps dict"""
1660 obscaps = caps.get(b'obsmarkers', ())
1660 obscaps = caps.get(b'obsmarkers', ())
1661 return [int(c[1:]) for c in obscaps if c.startswith(b'V')]
1661 return [int(c[1:]) for c in obscaps if c.startswith(b'V')]
1662
1662
1663
1663
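# Illustrative usage (not part of the original source): obsmarkersversion()
# merely strips the 'V' prefix from each advertised format, e.g.
#     obsmarkersversion({b'obsmarkers': (b'V0', b'V1')})  ->  [0, 1]
#     obsmarkersversion({})                               ->  []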
1664 def writenewbundle(
1664 def writenewbundle(
1665 ui,
1665 ui,
1666 repo,
1666 repo,
1667 source,
1667 source,
1668 filename,
1668 filename,
1669 bundletype,
1669 bundletype,
1670 outgoing,
1670 outgoing,
1671 opts,
1671 opts,
1672 vfs=None,
1672 vfs=None,
1673 compression=None,
1673 compression=None,
1674 compopts=None,
1674 compopts=None,
1675 ):
1675 ):
1676 if bundletype.startswith(b'HG10'):
1676 if bundletype.startswith(b'HG10'):
1677 cg = changegroup.makechangegroup(repo, outgoing, b'01', source)
1677 cg = changegroup.makechangegroup(repo, outgoing, b'01', source)
1678 return writebundle(
1678 return writebundle(
1679 ui,
1679 ui,
1680 cg,
1680 cg,
1681 filename,
1681 filename,
1682 bundletype,
1682 bundletype,
1683 vfs=vfs,
1683 vfs=vfs,
1684 compression=compression,
1684 compression=compression,
1685 compopts=compopts,
1685 compopts=compopts,
1686 )
1686 )
1687 elif not bundletype.startswith(b'HG20'):
1687 elif not bundletype.startswith(b'HG20'):
1688 raise error.ProgrammingError(b'unknown bundle type: %s' % bundletype)
1688 raise error.ProgrammingError(b'unknown bundle type: %s' % bundletype)
1689
1689
1690 caps = {}
1690 caps = {}
1691 if b'obsolescence' in opts:
1691 if b'obsolescence' in opts:
1692 caps[b'obsmarkers'] = (b'V1',)
1692 caps[b'obsmarkers'] = (b'V1',)
1693 bundle = bundle20(ui, caps)
1693 bundle = bundle20(ui, caps)
1694 bundle.setcompression(compression, compopts)
1694 bundle.setcompression(compression, compopts)
1695 _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
1695 _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
1696 chunkiter = bundle.getchunks()
1696 chunkiter = bundle.getchunks()
1697
1697
1698 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1698 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1699
1699
1700
1700
1701 def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
1701 def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
1702 # We should eventually reconcile this logic with the one behind
1702 # We should eventually reconcile this logic with the one behind
1703 # 'exchange.getbundle2partsgenerator'.
1703 # 'exchange.getbundle2partsgenerator'.
1704 #
1704 #
1705 # The type of input from 'getbundle' and 'writenewbundle' are a bit
1705 # The type of input from 'getbundle' and 'writenewbundle' are a bit
1706 # different right now. So we keep them separated for now for the sake of
1706 # different right now. So we keep them separated for now for the sake of
1707 # simplicity.
1707 # simplicity.
1708
1708
1709 # we might not always want a changegroup in such a bundle, for example in
1709 # we might not always want a changegroup in such a bundle, for example in
1710 # stream bundles
1710 # stream bundles
1711 if opts.get(b'changegroup', True):
1711 if opts.get(b'changegroup', True):
1712 cgversion = opts.get(b'cg.version')
1712 cgversion = opts.get(b'cg.version')
1713 if cgversion is None:
1713 if cgversion is None:
1714 cgversion = changegroup.safeversion(repo)
1714 cgversion = changegroup.safeversion(repo)
1715 cg = changegroup.makechangegroup(repo, outgoing, cgversion, source)
1715 cg = changegroup.makechangegroup(repo, outgoing, cgversion, source)
1716 part = bundler.newpart(b'changegroup', data=cg.getchunks())
1716 part = bundler.newpart(b'changegroup', data=cg.getchunks())
1717 part.addparam(b'version', cg.version)
1717 part.addparam(b'version', cg.version)
1718 if b'clcount' in cg.extras:
1718 if b'clcount' in cg.extras:
1719 part.addparam(
1719 part.addparam(
1720 b'nbchanges', b'%d' % cg.extras[b'clcount'], mandatory=False
1720 b'nbchanges', b'%d' % cg.extras[b'clcount'], mandatory=False
1721 )
1721 )
1722 if opts.get(b'phases') and repo.revs(
1722 if opts.get(b'phases') and repo.revs(
1723 b'%ln and secret()', outgoing.ancestorsof
1723 b'%ln and secret()', outgoing.ancestorsof
1724 ):
1724 ):
1725 part.addparam(
1725 part.addparam(
1726 b'targetphase', b'%d' % phases.secret, mandatory=False
1726 b'targetphase', b'%d' % phases.secret, mandatory=False
1727 )
1727 )
1728 if b'exp-sidedata-flag' in repo.requirements:
1728 if b'exp-sidedata-flag' in repo.requirements:
1729 part.addparam(b'exp-sidedata', b'1')
1729 part.addparam(b'exp-sidedata', b'1')
1730
1730
1731 if opts.get(b'streamv2', False):
1731 if opts.get(b'streamv2', False):
1732 addpartbundlestream2(bundler, repo, stream=True)
1732 addpartbundlestream2(bundler, repo, stream=True)
1733
1733
1734 if opts.get(b'tagsfnodescache', True):
1734 if opts.get(b'tagsfnodescache', True):
1735 addparttagsfnodescache(repo, bundler, outgoing)
1735 addparttagsfnodescache(repo, bundler, outgoing)
1736
1736
1737 if opts.get(b'revbranchcache', True):
1737 if opts.get(b'revbranchcache', True):
1738 addpartrevbranchcache(repo, bundler, outgoing)
1738 addpartrevbranchcache(repo, bundler, outgoing)
1739
1739
1740 if opts.get(b'obsolescence', False):
1740 if opts.get(b'obsolescence', False):
1741 obsmarkers = repo.obsstore.relevantmarkers(outgoing.missing)
1741 obsmarkers = repo.obsstore.relevantmarkers(outgoing.missing)
1742 buildobsmarkerspart(
1742 buildobsmarkerspart(
1743 bundler,
1743 bundler,
1744 obsmarkers,
1744 obsmarkers,
1745 mandatory=opts.get(b'obsolescence-mandatory', True),
1745 mandatory=opts.get(b'obsolescence-mandatory', True),
1746 )
1746 )
1747
1747
1748 if opts.get(b'phases', False):
1748 if opts.get(b'phases', False):
1749 headsbyphase = phases.subsetphaseheads(repo, outgoing.missing)
1749 headsbyphase = phases.subsetphaseheads(repo, outgoing.missing)
1750 phasedata = phases.binaryencode(headsbyphase)
1750 phasedata = phases.binaryencode(headsbyphase)
1751 bundler.newpart(b'phase-heads', data=phasedata)
1751 bundler.newpart(b'phase-heads', data=phasedata)
1752
1752
1753
1753
1754 def addparttagsfnodescache(repo, bundler, outgoing):
1754 def addparttagsfnodescache(repo, bundler, outgoing):
1755 # we include the tags fnode cache for the bundle changeset
1755 # we include the tags fnode cache for the bundle changeset
1756 # (as an optional part)
1756 # (as an optional part)
1757 cache = tags.hgtagsfnodescache(repo.unfiltered())
1757 cache = tags.hgtagsfnodescache(repo.unfiltered())
1758 chunks = []
1758 chunks = []
1759
1759
1760 # .hgtags fnodes are only relevant for head changesets. While we could
1760 # .hgtags fnodes are only relevant for head changesets. While we could
1761 # transfer values for all known nodes, there will likely be little to
1761 # transfer values for all known nodes, there will likely be little to
1762 # no benefit.
1762 # no benefit.
1763 #
1763 #
1764 # We don't bother using a generator to produce output data because
1764 # We don't bother using a generator to produce output data because
1765 # a) we only have 40 bytes per head and even esoteric numbers of heads
1765 # a) we only have 40 bytes per head and even esoteric numbers of heads
1766 # consume little memory (1M heads is 40MB) b) we don't want to send the
1766 # consume little memory (1M heads is 40MB) b) we don't want to send the
1767 # part if we don't have entries and knowing if we have entries requires
1767 # part if we don't have entries and knowing if we have entries requires
1768 # cache lookups.
1768 # cache lookups.
1769 for node in outgoing.ancestorsof:
1769 for node in outgoing.ancestorsof:
1770 # Don't compute missing, as this may slow down serving.
1770 # Don't compute missing, as this may slow down serving.
1771 fnode = cache.getfnode(node, computemissing=False)
1771 fnode = cache.getfnode(node, computemissing=False)
1772 if fnode is not None:
1772 if fnode:
1773 chunks.extend([node, fnode])
1773 chunks.extend([node, fnode])
1774
1774
1775 if chunks:
1775 if chunks:
1776 bundler.newpart(b'hgtagsfnodes', data=b''.join(chunks))
1776 bundler.newpart(b'hgtagsfnodes', data=b''.join(chunks))
1777
1777
1778
1778
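The 'hgtagsfnodes' payload built above is nothing more than concatenated pairs of 20-byte SHA-1 hashes (changeset node, then .hgtags filenode). A minimal, self-contained decoding sketch; the function name is illustrative, not part of the bundle2 API:

def parse_hgtagsfnodes(data):
    # Each entry is 40 bytes: a 20-byte changeset node followed by the
    # 20-byte filenode of its .hgtags file.
    assert len(data) % 40 == 0, 'truncated hgtagsfnodes payload'
    for offset in range(0, len(data), 40):
        yield data[offset : offset + 20], data[offset + 20 : offset + 40]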
1779 def addpartrevbranchcache(repo, bundler, outgoing):
1779 def addpartrevbranchcache(repo, bundler, outgoing):
1780 # we include the rev branch cache for the bundle changeset
1780 # we include the rev branch cache for the bundle changeset
1781 # (as an optional part)
1781 # (as an optional part)
1782 cache = repo.revbranchcache()
1782 cache = repo.revbranchcache()
1783 cl = repo.unfiltered().changelog
1783 cl = repo.unfiltered().changelog
1784 branchesdata = collections.defaultdict(lambda: (set(), set()))
1784 branchesdata = collections.defaultdict(lambda: (set(), set()))
1785 for node in outgoing.missing:
1785 for node in outgoing.missing:
1786 branch, close = cache.branchinfo(cl.rev(node))
1786 branch, close = cache.branchinfo(cl.rev(node))
1787 branchesdata[branch][close].add(node)
1787 branchesdata[branch][close].add(node)
1788
1788
1789 def generate():
1789 def generate():
1790 for branch, (nodes, closed) in sorted(branchesdata.items()):
1790 for branch, (nodes, closed) in sorted(branchesdata.items()):
1791 utf8branch = encoding.fromlocal(branch)
1791 utf8branch = encoding.fromlocal(branch)
1792 yield rbcstruct.pack(len(utf8branch), len(nodes), len(closed))
1792 yield rbcstruct.pack(len(utf8branch), len(nodes), len(closed))
1793 yield utf8branch
1793 yield utf8branch
1794 for n in sorted(nodes):
1794 for n in sorted(nodes):
1795 yield n
1795 yield n
1796 for n in sorted(closed):
1796 for n in sorted(closed):
1797 yield n
1797 yield n
1798
1798
1799 bundler.newpart(b'cache:rev-branch-cache', data=generate(), mandatory=False)
1799 bundler.newpart(b'cache:rev-branch-cache', data=generate(), mandatory=False)
1800
1800
1801
1801
1802 def _formatrequirementsspec(requirements):
1802 def _formatrequirementsspec(requirements):
1803 requirements = [req for req in requirements if req != b"shared"]
1803 requirements = [req for req in requirements if req != b"shared"]
1804 return urlreq.quote(b','.join(sorted(requirements)))
1804 return urlreq.quote(b','.join(sorted(requirements)))
1805
1805
1806
1806
1807 def _formatrequirementsparams(requirements):
1807 def _formatrequirementsparams(requirements):
1808 requirements = _formatrequirementsspec(requirements)
1808 requirements = _formatrequirementsspec(requirements)
1809 params = b"%s%s" % (urlreq.quote(b"requirements="), requirements)
1809 params = b"%s%s" % (urlreq.quote(b"requirements="), requirements)
1810 return params
1810 return params
1811
1811
1812
1812
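For illustration, assuming a repository whose requirements are {b'revlogv1', b'store'}, the two helpers above produce the following percent-encoded byte strings:

# _formatrequirementsspec([b'store', b'revlogv1'])
#   -> b'revlogv1%2Cstore'              (sorted, comma-joined, urlquoted)
# _formatrequirementsparams([b'store', b'revlogv1'])
#   -> b'requirements%3Drevlogv1%2Cstore'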
1813 def addpartbundlestream2(bundler, repo, **kwargs):
1813 def addpartbundlestream2(bundler, repo, **kwargs):
1814 if not kwargs.get('stream', False):
1814 if not kwargs.get('stream', False):
1815 return
1815 return
1816
1816
1817 if not streamclone.allowservergeneration(repo):
1817 if not streamclone.allowservergeneration(repo):
1818 raise error.Abort(
1818 raise error.Abort(
1819 _(
1819 _(
1820 b'stream data requested but server does not allow '
1820 b'stream data requested but server does not allow '
1821 b'this feature'
1821 b'this feature'
1822 ),
1822 ),
1823 hint=_(
1823 hint=_(
1824 b'well-behaved clients should not be '
1824 b'well-behaved clients should not be '
1825 b'requesting stream data from servers not '
1825 b'requesting stream data from servers not '
1826 b'advertising it; the client may be buggy'
1826 b'advertising it; the client may be buggy'
1827 ),
1827 ),
1828 )
1828 )
1829
1829
1830 # Stream clones don't compress well. And compression undermines a
1830 # Stream clones don't compress well. And compression undermines a
1831 # goal of stream clones, which is to be fast. Communicate the desire
1831 # goal of stream clones, which is to be fast. Communicate the desire
1832 # to avoid compression to consumers of the bundle.
1832 # to avoid compression to consumers of the bundle.
1833 bundler.prefercompressed = False
1833 bundler.prefercompressed = False
1834
1834
1835 # get the includes and excludes
1835 # get the includes and excludes
1836 includepats = kwargs.get('includepats')
1836 includepats = kwargs.get('includepats')
1837 excludepats = kwargs.get('excludepats')
1837 excludepats = kwargs.get('excludepats')
1838
1838
1839 narrowstream = repo.ui.configbool(
1839 narrowstream = repo.ui.configbool(
1840 b'experimental', b'server.stream-narrow-clones'
1840 b'experimental', b'server.stream-narrow-clones'
1841 )
1841 )
1842
1842
1843 if (includepats or excludepats) and not narrowstream:
1843 if (includepats or excludepats) and not narrowstream:
1844 raise error.Abort(_(b'server does not support narrow stream clones'))
1844 raise error.Abort(_(b'server does not support narrow stream clones'))
1845
1845
1846 includeobsmarkers = False
1846 includeobsmarkers = False
1847 if repo.obsstore:
1847 if repo.obsstore:
1848 remoteversions = obsmarkersversion(bundler.capabilities)
1848 remoteversions = obsmarkersversion(bundler.capabilities)
1849 if not remoteversions:
1849 if not remoteversions:
1850 raise error.Abort(
1850 raise error.Abort(
1851 _(
1851 _(
1852 b'server has obsolescence markers, but client '
1852 b'server has obsolescence markers, but client '
1853 b'cannot receive them via stream clone'
1853 b'cannot receive them via stream clone'
1854 )
1854 )
1855 )
1855 )
1856 elif repo.obsstore._version in remoteversions:
1856 elif repo.obsstore._version in remoteversions:
1857 includeobsmarkers = True
1857 includeobsmarkers = True
1858
1858
1859 filecount, bytecount, it = streamclone.generatev2(
1859 filecount, bytecount, it = streamclone.generatev2(
1860 repo, includepats, excludepats, includeobsmarkers
1860 repo, includepats, excludepats, includeobsmarkers
1861 )
1861 )
1862 requirements = _formatrequirementsspec(repo.requirements)
1862 requirements = _formatrequirementsspec(repo.requirements)
1863 part = bundler.newpart(b'stream2', data=it)
1863 part = bundler.newpart(b'stream2', data=it)
1864 part.addparam(b'bytecount', b'%d' % bytecount, mandatory=True)
1864 part.addparam(b'bytecount', b'%d' % bytecount, mandatory=True)
1865 part.addparam(b'filecount', b'%d' % filecount, mandatory=True)
1865 part.addparam(b'filecount', b'%d' % filecount, mandatory=True)
1866 part.addparam(b'requirements', requirements, mandatory=True)
1866 part.addparam(b'requirements', requirements, mandatory=True)
1867
1867
1868
1868
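A hypothetical caller mirrors the 'streamv2' branch near the top of this hunk; the narrow patterns only take effect when the server enables experimental.server.stream-narrow-clones:

addpartbundlestream2(
    bundler,
    repo,
    stream=True,                    # without this the function is a no-op
    includepats=[b'path:src'],      # illustrative narrow patterns
    excludepats=[b'path:src/vendor'],
)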
1869 def buildobsmarkerspart(bundler, markers, mandatory=True):
1869 def buildobsmarkerspart(bundler, markers, mandatory=True):
1870 """add an obsmarker part to the bundler with <markers>
1870 """add an obsmarker part to the bundler with <markers>
1871
1871
1872 No part is created if markers is empty.
1872 No part is created if markers is empty.
1873 Raises ValueError if the bundler doesn't support any known obsmarker format.
1873 Raises ValueError if the bundler doesn't support any known obsmarker format.
1874 """
1874 """
1875 if not markers:
1875 if not markers:
1876 return None
1876 return None
1877
1877
1878 remoteversions = obsmarkersversion(bundler.capabilities)
1878 remoteversions = obsmarkersversion(bundler.capabilities)
1879 version = obsolete.commonversion(remoteversions)
1879 version = obsolete.commonversion(remoteversions)
1880 if version is None:
1880 if version is None:
1881 raise ValueError(b'bundler does not support common obsmarker format')
1881 raise ValueError(b'bundler does not support common obsmarker format')
1882 stream = obsolete.encodemarkers(markers, True, version=version)
1882 stream = obsolete.encodemarkers(markers, True, version=version)
1883 return bundler.newpart(b'obsmarkers', data=stream, mandatory=mandatory)
1883 return bundler.newpart(b'obsmarkers', data=stream, mandatory=mandatory)
1884
1884
1885
1885
1886 def writebundle(
1886 def writebundle(
1887 ui, cg, filename, bundletype, vfs=None, compression=None, compopts=None
1887 ui, cg, filename, bundletype, vfs=None, compression=None, compopts=None
1888 ):
1888 ):
1889 """Write a bundle file and return its filename.
1889 """Write a bundle file and return its filename.
1890
1890
1891 Existing files will not be overwritten.
1891 Existing files will not be overwritten.
1892 If no filename is specified, a temporary file is created.
1892 If no filename is specified, a temporary file is created.
1893 bz2 compression can be turned off.
1893 bz2 compression can be turned off.
1894 The bundle file will be deleted in case of errors.
1894 The bundle file will be deleted in case of errors.
1895 """
1895 """
1896
1896
1897 if bundletype == b"HG20":
1897 if bundletype == b"HG20":
1898 bundle = bundle20(ui)
1898 bundle = bundle20(ui)
1899 bundle.setcompression(compression, compopts)
1899 bundle.setcompression(compression, compopts)
1900 part = bundle.newpart(b'changegroup', data=cg.getchunks())
1900 part = bundle.newpart(b'changegroup', data=cg.getchunks())
1901 part.addparam(b'version', cg.version)
1901 part.addparam(b'version', cg.version)
1902 if b'clcount' in cg.extras:
1902 if b'clcount' in cg.extras:
1903 part.addparam(
1903 part.addparam(
1904 b'nbchanges', b'%d' % cg.extras[b'clcount'], mandatory=False
1904 b'nbchanges', b'%d' % cg.extras[b'clcount'], mandatory=False
1905 )
1905 )
1906 chunkiter = bundle.getchunks()
1906 chunkiter = bundle.getchunks()
1907 else:
1907 else:
1908 # compression argument is only for the bundle2 case
1908 # compression argument is only for the bundle2 case
1909 assert compression is None
1909 assert compression is None
1910 if cg.version != b'01':
1910 if cg.version != b'01':
1911 raise error.Abort(
1911 raise error.Abort(
1912 _(b'old bundle types only support v1 changegroups')
1912 _(b'old bundle types only support v1 changegroups')
1913 )
1913 )
1914 header, comp = bundletypes[bundletype]
1914 header, comp = bundletypes[bundletype]
1915 if comp not in util.compengines.supportedbundletypes:
1915 if comp not in util.compengines.supportedbundletypes:
1916 raise error.Abort(_(b'unknown stream compression type: %s') % comp)
1916 raise error.Abort(_(b'unknown stream compression type: %s') % comp)
1917 compengine = util.compengines.forbundletype(comp)
1917 compengine = util.compengines.forbundletype(comp)
1918
1918
1919 def chunkiter():
1919 def chunkiter():
1920 yield header
1920 yield header
1921 for chunk in compengine.compressstream(cg.getchunks(), compopts):
1921 for chunk in compengine.compressstream(cg.getchunks(), compopts):
1922 yield chunk
1922 yield chunk
1923
1923
1924 chunkiter = chunkiter()
1924 chunkiter = chunkiter()
1925
1925
1926 # parse the changegroup data, otherwise we will block
1926 # parse the changegroup data, otherwise we will block
1927 # in case of sshrepo because we don't know the end of the stream
1927 # in case of sshrepo because we don't know the end of the stream
1928 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1928 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1929
1929
1930
1930
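A rough usage sketch for writebundle; 'cg' would come from e.g. changegroup.makechangegroup, and the filenames are illustrative:

# Modern bundle2 container; compression=None leaves the payload uncompressed.
fname = writebundle(ui, cg, b'backup.hg', b'HG20', compression=None)

# Legacy container with built-in bzip2 compression; per the check above,
# 'cg' must then be a version '01' changegroup.
fname = writebundle(ui, cg, b'backup-v1.hg', b'HG10BZ')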
1931 def combinechangegroupresults(op):
1931 def combinechangegroupresults(op):
1932 """logic to combine 0 or more addchangegroup results into one"""
1932 """logic to combine 0 or more addchangegroup results into one"""
1933 results = [r.get(b'return', 0) for r in op.records[b'changegroup']]
1933 results = [r.get(b'return', 0) for r in op.records[b'changegroup']]
1934 changedheads = 0
1934 changedheads = 0
1935 result = 1
1935 result = 1
1936 for ret in results:
1936 for ret in results:
1937 # If any changegroup result is 0, return 0
1937 # If any changegroup result is 0, return 0
1938 if ret == 0:
1938 if ret == 0:
1939 result = 0
1939 result = 0
1940 break
1940 break
1941 if ret < -1:
1941 if ret < -1:
1942 changedheads += ret + 1
1942 changedheads += ret + 1
1943 elif ret > 1:
1943 elif ret > 1:
1944 changedheads += ret - 1
1944 changedheads += ret - 1
1945 if changedheads > 0:
1945 if changedheads > 0:
1946 result = 1 + changedheads
1946 result = 1 + changedheads
1947 elif changedheads < 0:
1947 elif changedheads < 0:
1948 result = -1 + changedheads
1948 result = -1 + changedheads
1949 return result
1949 return result
1950
1950
1951
1951
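The combining logic above leans on the changegroup return-value convention, which is compact but easy to misread; a worked example with illustrative values:

# Convention for each changegroup result:
#   0          error (forces the combined result to 0)
#   1          success, number of heads unchanged
#   1 + n      success, n heads added      (n >= 1)
#   -1 - n     success, n heads removed    (n >= 1)
#
# results = [3, -2]:
#   3  -> changedheads += 3 - 1  = +2   (two heads added)
#   -2 -> changedheads += -2 + 1 = -1   (one head removed)
#   changedheads == 1 > 0, so the combined result is 1 + 1 = 2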
1952 @parthandler(
1952 @parthandler(
1953 b'changegroup',
1953 b'changegroup',
1954 (
1954 (
1955 b'version',
1955 b'version',
1956 b'nbchanges',
1956 b'nbchanges',
1957 b'exp-sidedata',
1957 b'exp-sidedata',
1958 b'treemanifest',
1958 b'treemanifest',
1959 b'targetphase',
1959 b'targetphase',
1960 ),
1960 ),
1961 )
1961 )
1962 def handlechangegroup(op, inpart):
1962 def handlechangegroup(op, inpart):
1963 """apply a changegroup part on the repo"""
1963 """apply a changegroup part on the repo"""
1964 from . import localrepo
1964 from . import localrepo
1965
1965
1966 tr = op.gettransaction()
1966 tr = op.gettransaction()
1967 unpackerversion = inpart.params.get(b'version', b'01')
1967 unpackerversion = inpart.params.get(b'version', b'01')
1968 # We should raise an appropriate exception here
1968 # We should raise an appropriate exception here
1969 cg = changegroup.getunbundler(unpackerversion, inpart, None)
1969 cg = changegroup.getunbundler(unpackerversion, inpart, None)
1970 # the source and url passed here are overwritten by the ones contained in
1970 # the source and url passed here are overwritten by the ones contained in
1971 # the transaction.hookargs argument. So 'bundle2' is a placeholder.
1971 # the transaction.hookargs argument. So 'bundle2' is a placeholder.
1972 nbchangesets = None
1972 nbchangesets = None
1973 if b'nbchanges' in inpart.params:
1973 if b'nbchanges' in inpart.params:
1974 nbchangesets = int(inpart.params.get(b'nbchanges'))
1974 nbchangesets = int(inpart.params.get(b'nbchanges'))
1975 if b'treemanifest' in inpart.params and not scmutil.istreemanifest(op.repo):
1975 if b'treemanifest' in inpart.params and not scmutil.istreemanifest(op.repo):
1976 if len(op.repo.changelog) != 0:
1976 if len(op.repo.changelog) != 0:
1977 raise error.Abort(
1977 raise error.Abort(
1978 _(
1978 _(
1979 b"bundle contains tree manifests, but local repo is "
1979 b"bundle contains tree manifests, but local repo is "
1980 b"non-empty and does not use tree manifests"
1980 b"non-empty and does not use tree manifests"
1981 )
1981 )
1982 )
1982 )
1983 op.repo.requirements.add(requirements.TREEMANIFEST_REQUIREMENT)
1983 op.repo.requirements.add(requirements.TREEMANIFEST_REQUIREMENT)
1984 op.repo.svfs.options = localrepo.resolvestorevfsoptions(
1984 op.repo.svfs.options = localrepo.resolvestorevfsoptions(
1985 op.repo.ui, op.repo.requirements, op.repo.features
1985 op.repo.ui, op.repo.requirements, op.repo.features
1986 )
1986 )
1987 scmutil.writereporequirements(op.repo)
1987 scmutil.writereporequirements(op.repo)
1988
1988
1989 bundlesidedata = bool(b'exp-sidedata' in inpart.params)
1989 bundlesidedata = bool(b'exp-sidedata' in inpart.params)
1990 reposidedata = bool(b'exp-sidedata-flag' in op.repo.requirements)
1990 reposidedata = bool(b'exp-sidedata-flag' in op.repo.requirements)
1991 if reposidedata and not bundlesidedata:
1991 if reposidedata and not bundlesidedata:
1992 msg = b"repository is using sidedata but the bundle source do not"
1992 msg = b"repository is using sidedata but the bundle source do not"
1993 hint = b'this is currently unsupported'
1993 hint = b'this is currently unsupported'
1994 raise error.Abort(msg, hint=hint)
1994 raise error.Abort(msg, hint=hint)
1995
1995
1996 extrakwargs = {}
1996 extrakwargs = {}
1997 targetphase = inpart.params.get(b'targetphase')
1997 targetphase = inpart.params.get(b'targetphase')
1998 if targetphase is not None:
1998 if targetphase is not None:
1999 extrakwargs['targetphase'] = int(targetphase)
1999 extrakwargs['targetphase'] = int(targetphase)
2000 ret = _processchangegroup(
2000 ret = _processchangegroup(
2001 op,
2001 op,
2002 cg,
2002 cg,
2003 tr,
2003 tr,
2004 b'bundle2',
2004 b'bundle2',
2005 b'bundle2',
2005 b'bundle2',
2006 expectedtotal=nbchangesets,
2006 expectedtotal=nbchangesets,
2007 **extrakwargs
2007 **extrakwargs
2008 )
2008 )
2009 if op.reply is not None:
2009 if op.reply is not None:
2010 # This is definitely not the final form of this
2010 # This is definitely not the final form of this
2011 # return. But one needs to start somewhere.
2011 # return. But one needs to start somewhere.
2012 part = op.reply.newpart(b'reply:changegroup', mandatory=False)
2012 part = op.reply.newpart(b'reply:changegroup', mandatory=False)
2013 part.addparam(
2013 part.addparam(
2014 b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
2014 b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
2015 )
2015 )
2016 part.addparam(b'return', b'%i' % ret, mandatory=False)
2016 part.addparam(b'return', b'%i' % ret, mandatory=False)
2017 assert not inpart.read()
2017 assert not inpart.read()
2018
2018
2019
2019
2020 _remotechangegroupparams = tuple(
2020 _remotechangegroupparams = tuple(
2021 [b'url', b'size', b'digests']
2021 [b'url', b'size', b'digests']
2022 + [b'digest:%s' % k for k in util.DIGESTS.keys()]
2022 + [b'digest:%s' % k for k in util.DIGESTS.keys()]
2023 )
2023 )
2024
2024
2025
2025
2026 @parthandler(b'remote-changegroup', _remotechangegroupparams)
2026 @parthandler(b'remote-changegroup', _remotechangegroupparams)
2027 def handleremotechangegroup(op, inpart):
2027 def handleremotechangegroup(op, inpart):
2028 """apply a bundle10 on the repo, given an url and validation information
2028 """apply a bundle10 on the repo, given an url and validation information
2029
2029
2030 All the information about the remote bundle to import are given as
2030 All the information about the remote bundle to import are given as
2031 parameters. The parameters include:
2031 parameters. The parameters include:
2032 - url: the url to the bundle10.
2032 - url: the url to the bundle10.
2033 - size: the bundle10 file size. It is used to validate what was
2033 - size: the bundle10 file size. It is used to validate what was
2034 retrieved by the client matches the server knowledge about the bundle.
2034 retrieved by the client matches the server knowledge about the bundle.
2035 - digests: a space separated list of the digest types provided as
2035 - digests: a space separated list of the digest types provided as
2036 parameters.
2036 parameters.
2037 - digest:<digest-type>: the hexadecimal representation of the digest with
2037 - digest:<digest-type>: the hexadecimal representation of the digest with
2038 that name. Like the size, it is used to validate what was retrieved by
2038 that name. Like the size, it is used to validate what was retrieved by
2039 the client matches what the server knows about the bundle.
2039 the client matches what the server knows about the bundle.
2040
2040
2041 When multiple digest types are given, all of them are checked.
2041 When multiple digest types are given, all of them are checked.
2042 """
2042 """
2043 try:
2043 try:
2044 raw_url = inpart.params[b'url']
2044 raw_url = inpart.params[b'url']
2045 except KeyError:
2045 except KeyError:
2046 raise error.Abort(_(b'remote-changegroup: missing "%s" param') % b'url')
2046 raise error.Abort(_(b'remote-changegroup: missing "%s" param') % b'url')
2047 parsed_url = util.url(raw_url)
2047 parsed_url = util.url(raw_url)
2048 if parsed_url.scheme not in capabilities[b'remote-changegroup']:
2048 if parsed_url.scheme not in capabilities[b'remote-changegroup']:
2049 raise error.Abort(
2049 raise error.Abort(
2050 _(b'remote-changegroup does not support %s urls')
2050 _(b'remote-changegroup does not support %s urls')
2051 % parsed_url.scheme
2051 % parsed_url.scheme
2052 )
2052 )
2053
2053
2054 try:
2054 try:
2055 size = int(inpart.params[b'size'])
2055 size = int(inpart.params[b'size'])
2056 except ValueError:
2056 except ValueError:
2057 raise error.Abort(
2057 raise error.Abort(
2058 _(b'remote-changegroup: invalid value for param "%s"') % b'size'
2058 _(b'remote-changegroup: invalid value for param "%s"') % b'size'
2059 )
2059 )
2060 except KeyError:
2060 except KeyError:
2061 raise error.Abort(
2061 raise error.Abort(
2062 _(b'remote-changegroup: missing "%s" param') % b'size'
2062 _(b'remote-changegroup: missing "%s" param') % b'size'
2063 )
2063 )
2064
2064
2065 digests = {}
2065 digests = {}
2066 for typ in inpart.params.get(b'digests', b'').split():
2066 for typ in inpart.params.get(b'digests', b'').split():
2067 param = b'digest:%s' % typ
2067 param = b'digest:%s' % typ
2068 try:
2068 try:
2069 value = inpart.params[param]
2069 value = inpart.params[param]
2070 except KeyError:
2070 except KeyError:
2071 raise error.Abort(
2071 raise error.Abort(
2072 _(b'remote-changegroup: missing "%s" param') % param
2072 _(b'remote-changegroup: missing "%s" param') % param
2073 )
2073 )
2074 digests[typ] = value
2074 digests[typ] = value
2075
2075
2076 real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)
2076 real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)
2077
2077
2078 tr = op.gettransaction()
2078 tr = op.gettransaction()
2079 from . import exchange
2079 from . import exchange
2080
2080
2081 cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
2081 cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
2082 if not isinstance(cg, changegroup.cg1unpacker):
2082 if not isinstance(cg, changegroup.cg1unpacker):
2083 raise error.Abort(
2083 raise error.Abort(
2084 _(b'%s: not a bundle version 1.0') % util.hidepassword(raw_url)
2084 _(b'%s: not a bundle version 1.0') % util.hidepassword(raw_url)
2085 )
2085 )
2086 ret = _processchangegroup(op, cg, tr, b'bundle2', b'bundle2')
2086 ret = _processchangegroup(op, cg, tr, b'bundle2', b'bundle2')
2087 if op.reply is not None:
2087 if op.reply is not None:
2088 # This is definitely not the final form of this
2088 # This is definitely not the final form of this
2089 # return. But one needs to start somewhere.
2089 # return. But one needs to start somewhere.
2090 part = op.reply.newpart(b'reply:changegroup')
2090 part = op.reply.newpart(b'reply:changegroup')
2091 part.addparam(
2091 part.addparam(
2092 b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
2092 b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
2093 )
2093 )
2094 part.addparam(b'return', b'%i' % ret, mandatory=False)
2094 part.addparam(b'return', b'%i' % ret, mandatory=False)
2095 try:
2095 try:
2096 real_part.validate()
2096 real_part.validate()
2097 except error.Abort as e:
2097 except error.Abort as e:
2098 raise error.Abort(
2098 raise error.Abort(
2099 _(b'bundle at %s is corrupted:\n%s')
2099 _(b'bundle at %s is corrupted:\n%s')
2100 % (util.hidepassword(raw_url), e.message)
2100 % (util.hidepassword(raw_url), e.message)
2101 )
2101 )
2102 assert not inpart.read()
2102 assert not inpart.read()
2103
2103
2104
2104
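On the producing side, a 'remote-changegroup' part would be populated roughly as follows; the helper name and URL are hypothetical, and only schemes listed in the bundle's 'remote-changegroup' capability pass the check above:

import hashlib

def add_remote_changegroup_part(bundler, bundle_url, bundle_data):
    # bundle_url: e.g. b'https://example.com/pull.hg' (must be a bundle10)
    part = bundler.newpart(b'remote-changegroup')
    part.addparam(b'url', bundle_url)
    part.addparam(b'size', b'%d' % len(bundle_data))
    part.addparam(b'digests', b'sha1')
    part.addparam(
        b'digest:sha1', hashlib.sha1(bundle_data).hexdigest().encode('ascii')
    )
    return part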
2105 @parthandler(b'reply:changegroup', (b'return', b'in-reply-to'))
2105 @parthandler(b'reply:changegroup', (b'return', b'in-reply-to'))
2106 def handlereplychangegroup(op, inpart):
2106 def handlereplychangegroup(op, inpart):
2107 ret = int(inpart.params[b'return'])
2107 ret = int(inpart.params[b'return'])
2108 replyto = int(inpart.params[b'in-reply-to'])
2108 replyto = int(inpart.params[b'in-reply-to'])
2109 op.records.add(b'changegroup', {b'return': ret}, replyto)
2109 op.records.add(b'changegroup', {b'return': ret}, replyto)
2110
2110
2111
2111
2112 @parthandler(b'check:bookmarks')
2112 @parthandler(b'check:bookmarks')
2113 def handlecheckbookmarks(op, inpart):
2113 def handlecheckbookmarks(op, inpart):
2114 """check location of bookmarks
2114 """check location of bookmarks
2115
2115
2116 This part is to be used to detect push races regarding bookmarks; it
2116 This part is to be used to detect push races regarding bookmarks; it
2117 contains binary encoded (bookmark, node) tuples. If the local state does
2117 contains binary encoded (bookmark, node) tuples. If the local state does
2118 not match the ones in the part, a PushRaced exception is raised.
2118 not match the ones in the part, a PushRaced exception is raised.
2119 """
2119 """
2120 bookdata = bookmarks.binarydecode(inpart)
2120 bookdata = bookmarks.binarydecode(inpart)
2121
2121
2122 msgstandard = (
2122 msgstandard = (
2123 b'remote repository changed while pushing - please try again '
2123 b'remote repository changed while pushing - please try again '
2124 b'(bookmark "%s" move from %s to %s)'
2124 b'(bookmark "%s" move from %s to %s)'
2125 )
2125 )
2126 msgmissing = (
2126 msgmissing = (
2127 b'remote repository changed while pushing - please try again '
2127 b'remote repository changed while pushing - please try again '
2128 b'(bookmark "%s" is missing, expected %s)'
2128 b'(bookmark "%s" is missing, expected %s)'
2129 )
2129 )
2130 msgexist = (
2130 msgexist = (
2131 b'remote repository changed while pushing - please try again '
2131 b'remote repository changed while pushing - please try again '
2132 b'(bookmark "%s" set on %s, expected missing)'
2132 b'(bookmark "%s" set on %s, expected missing)'
2133 )
2133 )
2134 for book, node in bookdata:
2134 for book, node in bookdata:
2135 currentnode = op.repo._bookmarks.get(book)
2135 currentnode = op.repo._bookmarks.get(book)
2136 if currentnode != node:
2136 if currentnode != node:
2137 if node is None:
2137 if node is None:
2138 finalmsg = msgexist % (book, short(currentnode))
2138 finalmsg = msgexist % (book, short(currentnode))
2139 elif currentnode is None:
2139 elif currentnode is None:
2140 finalmsg = msgmissing % (book, short(node))
2140 finalmsg = msgmissing % (book, short(node))
2141 else:
2141 else:
2142 finalmsg = msgstandard % (
2142 finalmsg = msgstandard % (
2143 book,
2143 book,
2144 short(node),
2144 short(node),
2145 short(currentnode),
2145 short(currentnode),
2146 )
2146 )
2147 raise error.PushRaced(finalmsg)
2147 raise error.PushRaced(finalmsg)
2148
2148
2149
2149
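The producing side encodes the (bookmark, node) pairs the pushing client believes the server has; a sketch, assuming bookmarks.binaryencode is the counterpart of the binarydecode call above and that a node of None means "expected absent":

def add_check_bookmarks_part(bundler, expected):
    # expected: mapping of bookmark name -> node (or None for "absent")
    data = bookmarks.binaryencode(sorted(expected.items()))
    bundler.newpart(b'check:bookmarks', data=data)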
2150 @parthandler(b'check:heads')
2150 @parthandler(b'check:heads')
2151 def handlecheckheads(op, inpart):
2151 def handlecheckheads(op, inpart):
2152 """check that head of the repo did not change
2152 """check that head of the repo did not change
2153
2153
2154 This is used to detect a push race when using unbundle.
2154 This is used to detect a push race when using unbundle.
2155 This replaces the "heads" argument of unbundle."""
2155 This replaces the "heads" argument of unbundle."""
2156 h = inpart.read(20)
2156 h = inpart.read(20)
2157 heads = []
2157 heads = []
2158 while len(h) == 20:
2158 while len(h) == 20:
2159 heads.append(h)
2159 heads.append(h)
2160 h = inpart.read(20)
2160 h = inpart.read(20)
2161 assert not h
2161 assert not h
2162 # Trigger a transaction so that we are guaranteed to have the lock now.
2162 # Trigger a transaction so that we are guaranteed to have the lock now.
2163 if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
2163 if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
2164 op.gettransaction()
2164 op.gettransaction()
2165 if sorted(heads) != sorted(op.repo.heads()):
2165 if sorted(heads) != sorted(op.repo.heads()):
2166 raise error.PushRaced(
2166 raise error.PushRaced(
2167 b'remote repository changed while pushing - please try again'
2167 b'remote repository changed while pushing - please try again'
2168 )
2168 )
2169
2169
2170
2170
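The payload consumed here is the raw concatenation of the 20-byte heads the client observed before pushing; a minimal producer sketch (the helper name is illustrative):

def add_check_heads_part(bundler, remoteheads):
    # remoteheads: list of 20-byte binary changeset nodes
    bundler.newpart(b'check:heads', data=b''.join(sorted(remoteheads)))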
2171 @parthandler(b'check:updated-heads')
2171 @parthandler(b'check:updated-heads')
2172 def handlecheckupdatedheads(op, inpart):
2172 def handlecheckupdatedheads(op, inpart):
2173 """check for race on the heads touched by a push
2173 """check for race on the heads touched by a push
2174
2174
2175 This is similar to 'check:heads' but focuses on the heads actually updated
2175 This is similar to 'check:heads' but focuses on the heads actually updated
2176 during the push. If other activity happens on unrelated heads, it is
2176 during the push. If other activity happens on unrelated heads, it is
2177 ignored.
2177 ignored.
2178
2178
2179 This allows servers with high traffic to avoid push contention as long as
2179 This allows servers with high traffic to avoid push contention as long as
2180 only unrelated parts of the graph are involved."""
2180 only unrelated parts of the graph are involved."""
2181 h = inpart.read(20)
2181 h = inpart.read(20)
2182 heads = []
2182 heads = []
2183 while len(h) == 20:
2183 while len(h) == 20:
2184 heads.append(h)
2184 heads.append(h)
2185 h = inpart.read(20)
2185 h = inpart.read(20)
2186 assert not h
2186 assert not h
2187 # Trigger a transaction so that we are guaranteed to have the lock now.
2187 # Trigger a transaction so that we are guaranteed to have the lock now.
2188 if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
2188 if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
2189 op.gettransaction()
2189 op.gettransaction()
2190
2190
2191 currentheads = set()
2191 currentheads = set()
2192 for ls in op.repo.branchmap().iterheads():
2192 for ls in op.repo.branchmap().iterheads():
2193 currentheads.update(ls)
2193 currentheads.update(ls)
2194
2194
2195 for h in heads:
2195 for h in heads:
2196 if h not in currentheads:
2196 if h not in currentheads:
2197 raise error.PushRaced(
2197 raise error.PushRaced(
2198 b'remote repository changed while pushing - '
2198 b'remote repository changed while pushing - '
2199 b'please try again'
2199 b'please try again'
2200 )
2200 )
2201
2201
2202
2202
2203 @parthandler(b'check:phases')
2203 @parthandler(b'check:phases')
2204 def handlecheckphases(op, inpart):
2204 def handlecheckphases(op, inpart):
2205 """check that phase boundaries of the repository did not change
2205 """check that phase boundaries of the repository did not change
2206
2206
2207 This is used to detect a push race.
2207 This is used to detect a push race.
2208 """
2208 """
2209 phasetonodes = phases.binarydecode(inpart)
2209 phasetonodes = phases.binarydecode(inpart)
2210 unfi = op.repo.unfiltered()
2210 unfi = op.repo.unfiltered()
2211 cl = unfi.changelog
2211 cl = unfi.changelog
2212 phasecache = unfi._phasecache
2212 phasecache = unfi._phasecache
2213 msg = (
2213 msg = (
2214 b'remote repository changed while pushing - please try again '
2214 b'remote repository changed while pushing - please try again '
2215 b'(%s is %s expected %s)'
2215 b'(%s is %s expected %s)'
2216 )
2216 )
2217 for expectedphase, nodes in pycompat.iteritems(phasetonodes):
2217 for expectedphase, nodes in pycompat.iteritems(phasetonodes):
2218 for n in nodes:
2218 for n in nodes:
2219 actualphase = phasecache.phase(unfi, cl.rev(n))
2219 actualphase = phasecache.phase(unfi, cl.rev(n))
2220 if actualphase != expectedphase:
2220 if actualphase != expectedphase:
2221 finalmsg = msg % (
2221 finalmsg = msg % (
2222 short(n),
2222 short(n),
2223 phases.phasenames[actualphase],
2223 phases.phasenames[actualphase],
2224 phases.phasenames[expectedphase],
2224 phases.phasenames[expectedphase],
2225 )
2225 )
2226 raise error.PushRaced(finalmsg)
2226 raise error.PushRaced(finalmsg)
2227
2227
2228
2228
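Symmetrically, a producer would encode the expected phase heads with phases.binaryencode, just as the 'phases' option does for the 'phase-heads' part earlier in this hunk; a sketch under that assumption:

def add_check_phases_part(bundler, headsbyphase):
    # headsbyphase: phase -> nodes mapping, e.g. from phases.subsetphaseheads()
    bundler.newpart(b'check:phases', data=phases.binaryencode(headsbyphase))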
2229 @parthandler(b'output')
2229 @parthandler(b'output')
2230 def handleoutput(op, inpart):
2230 def handleoutput(op, inpart):
2231 """forward output captured on the server to the client"""
2231 """forward output captured on the server to the client"""
2232 for line in inpart.read().splitlines():
2232 for line in inpart.read().splitlines():
2233 op.ui.status(_(b'remote: %s\n') % line)
2233 op.ui.status(_(b'remote: %s\n') % line)
2234
2234
2235
2235
2236 @parthandler(b'replycaps')
2236 @parthandler(b'replycaps')
2237 def handlereplycaps(op, inpart):
2237 def handlereplycaps(op, inpart):
2238 """Notify that a reply bundle should be created
2238 """Notify that a reply bundle should be created
2239
2239
2240 The payload contains the capabilities information for the reply"""
2240 The payload contains the capabilities information for the reply"""
2241 caps = decodecaps(inpart.read())
2241 caps = decodecaps(inpart.read())
2242 if op.reply is None:
2242 if op.reply is None:
2243 op.reply = bundle20(op.ui, caps)
2243 op.reply = bundle20(op.ui, caps)
2244
2244
2245
2245
2246 class AbortFromPart(error.Abort):
2246 class AbortFromPart(error.Abort):
2247 """Sub-class of Abort that denotes an error from a bundle2 part."""
2247 """Sub-class of Abort that denotes an error from a bundle2 part."""
2248
2248
2249
2249
2250 @parthandler(b'error:abort', (b'message', b'hint'))
2250 @parthandler(b'error:abort', (b'message', b'hint'))
2251 def handleerrorabort(op, inpart):
2251 def handleerrorabort(op, inpart):
2252 """Used to transmit abort error over the wire"""
2252 """Used to transmit abort error over the wire"""
2253 raise AbortFromPart(
2253 raise AbortFromPart(
2254 inpart.params[b'message'], hint=inpart.params.get(b'hint')
2254 inpart.params[b'message'], hint=inpart.params.get(b'hint')
2255 )
2255 )
2256
2256
2257
2257
2258 @parthandler(
2258 @parthandler(
2259 b'error:pushkey',
2259 b'error:pushkey',
2260 (b'namespace', b'key', b'new', b'old', b'ret', b'in-reply-to'),
2260 (b'namespace', b'key', b'new', b'old', b'ret', b'in-reply-to'),
2261 )
2261 )
2262 def handleerrorpushkey(op, inpart):
2262 def handleerrorpushkey(op, inpart):
2263 """Used to transmit failure of a mandatory pushkey over the wire"""
2263 """Used to transmit failure of a mandatory pushkey over the wire"""
2264 kwargs = {}
2264 kwargs = {}
2265 for name in (b'namespace', b'key', b'new', b'old', b'ret'):
2265 for name in (b'namespace', b'key', b'new', b'old', b'ret'):
2266 value = inpart.params.get(name)
2266 value = inpart.params.get(name)
2267 if value is not None:
2267 if value is not None:
2268 kwargs[name] = value
2268 kwargs[name] = value
2269 raise error.PushkeyFailed(
2269 raise error.PushkeyFailed(
2270 inpart.params[b'in-reply-to'], **pycompat.strkwargs(kwargs)
2270 inpart.params[b'in-reply-to'], **pycompat.strkwargs(kwargs)
2271 )
2271 )
2272
2272
2273
2273
2274 @parthandler(b'error:unsupportedcontent', (b'parttype', b'params'))
2274 @parthandler(b'error:unsupportedcontent', (b'parttype', b'params'))
2275 def handleerrorunsupportedcontent(op, inpart):
2275 def handleerrorunsupportedcontent(op, inpart):
2276 """Used to transmit unknown content error over the wire"""
2276 """Used to transmit unknown content error over the wire"""
2277 kwargs = {}
2277 kwargs = {}
2278 parttype = inpart.params.get(b'parttype')
2278 parttype = inpart.params.get(b'parttype')
2279 if parttype is not None:
2279 if parttype is not None:
2280 kwargs[b'parttype'] = parttype
2280 kwargs[b'parttype'] = parttype
2281 params = inpart.params.get(b'params')
2281 params = inpart.params.get(b'params')
2282 if params is not None:
2282 if params is not None:
2283 kwargs[b'params'] = params.split(b'\0')
2283 kwargs[b'params'] = params.split(b'\0')
2284
2284
2285 raise error.BundleUnknownFeatureError(**pycompat.strkwargs(kwargs))
2285 raise error.BundleUnknownFeatureError(**pycompat.strkwargs(kwargs))
2286
2286
2287
2287
2288 @parthandler(b'error:pushraced', (b'message',))
2288 @parthandler(b'error:pushraced', (b'message',))
2289 def handleerrorpushraced(op, inpart):
2289 def handleerrorpushraced(op, inpart):
2290 """Used to transmit push race error over the wire"""
2290 """Used to transmit push race error over the wire"""
2291 raise error.ResponseError(_(b'push failed:'), inpart.params[b'message'])
2291 raise error.ResponseError(_(b'push failed:'), inpart.params[b'message'])
2292
2292
2293
2293
2294 @parthandler(b'listkeys', (b'namespace',))
2294 @parthandler(b'listkeys', (b'namespace',))
2295 def handlelistkeys(op, inpart):
2295 def handlelistkeys(op, inpart):
2296 """retrieve pushkey namespace content stored in a bundle2"""
2296 """retrieve pushkey namespace content stored in a bundle2"""
2297 namespace = inpart.params[b'namespace']
2297 namespace = inpart.params[b'namespace']
2298 r = pushkey.decodekeys(inpart.read())
2298 r = pushkey.decodekeys(inpart.read())
2299 op.records.add(b'listkeys', (namespace, r))
2299 op.records.add(b'listkeys', (namespace, r))
2300
2300
2301
2301
2302 @parthandler(b'pushkey', (b'namespace', b'key', b'old', b'new'))
2302 @parthandler(b'pushkey', (b'namespace', b'key', b'old', b'new'))
2303 def handlepushkey(op, inpart):
2303 def handlepushkey(op, inpart):
2304 """process a pushkey request"""
2304 """process a pushkey request"""
2305 dec = pushkey.decode
2305 dec = pushkey.decode
2306 namespace = dec(inpart.params[b'namespace'])
2306 namespace = dec(inpart.params[b'namespace'])
2307 key = dec(inpart.params[b'key'])
2307 key = dec(inpart.params[b'key'])
2308 old = dec(inpart.params[b'old'])
2308 old = dec(inpart.params[b'old'])
2309 new = dec(inpart.params[b'new'])
2309 new = dec(inpart.params[b'new'])
2310 # Grab the transaction to ensure that we have the lock before performing the
2310 # Grab the transaction to ensure that we have the lock before performing the
2311 # pushkey.
2311 # pushkey.
2312 if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
2312 if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
2313 op.gettransaction()
2313 op.gettransaction()
2314 ret = op.repo.pushkey(namespace, key, old, new)
2314 ret = op.repo.pushkey(namespace, key, old, new)
2315 record = {b'namespace': namespace, b'key': key, b'old': old, b'new': new}
2315 record = {b'namespace': namespace, b'key': key, b'old': old, b'new': new}
2316 op.records.add(b'pushkey', record)
2316 op.records.add(b'pushkey', record)
2317 if op.reply is not None:
2317 if op.reply is not None:
2318 rpart = op.reply.newpart(b'reply:pushkey')
2318 rpart = op.reply.newpart(b'reply:pushkey')
2319 rpart.addparam(
2319 rpart.addparam(
2320 b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
2320 b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
2321 )
2321 )
2322 rpart.addparam(b'return', b'%i' % ret, mandatory=False)
2322 rpart.addparam(b'return', b'%i' % ret, mandatory=False)
2323 if inpart.mandatory and not ret:
2323 if inpart.mandatory and not ret:
2324 kwargs = {}
2324 kwargs = {}
2325 for key in (b'namespace', b'key', b'new', b'old', b'ret'):
2325 for key in (b'namespace', b'key', b'new', b'old', b'ret'):
2326 if key in inpart.params:
2326 if key in inpart.params:
2327 kwargs[key] = inpart.params[key]
2327 kwargs[key] = inpart.params[key]
2328 raise error.PushkeyFailed(
2328 raise error.PushkeyFailed(
2329 partid=b'%d' % inpart.id, **pycompat.strkwargs(kwargs)
2329 partid=b'%d' % inpart.id, **pycompat.strkwargs(kwargs)
2330 )
2330 )
2331
2331
2332
2332
2333 @parthandler(b'bookmarks')
2333 @parthandler(b'bookmarks')
2334 def handlebookmark(op, inpart):
2334 def handlebookmark(op, inpart):
2335 """transmit bookmark information
2335 """transmit bookmark information
2336
2336
2337 The part contains binary encoded bookmark information.
2337 The part contains binary encoded bookmark information.
2338
2338
2339 The exact behavior of this part can be controlled by the 'bookmarks' mode
2339 The exact behavior of this part can be controlled by the 'bookmarks' mode
2340 on the bundle operation.
2340 on the bundle operation.
2341
2341
2342 When mode is 'apply' (the default) the bookmark information is applied as
2342 When mode is 'apply' (the default) the bookmark information is applied as
2343 is to the unbundling repository. Make sure a 'check:bookmarks' part is
2343 is to the unbundling repository. Make sure a 'check:bookmarks' part is
2344 issued earlier to check for push races in such an update. This behavior is
2344 issued earlier to check for push races in such an update. This behavior is
2345 suitable for pushing.
2345 suitable for pushing.
2346
2346
2347 When mode is 'records', the information is recorded into the 'bookmarks'
2347 When mode is 'records', the information is recorded into the 'bookmarks'
2348 records of the bundle operation. This behavior is suitable for pulling.
2348 records of the bundle operation. This behavior is suitable for pulling.
2349 """
2349 """
2350 changes = bookmarks.binarydecode(inpart)
2350 changes = bookmarks.binarydecode(inpart)
2351
2351
2352 pushkeycompat = op.repo.ui.configbool(
2352 pushkeycompat = op.repo.ui.configbool(
2353 b'server', b'bookmarks-pushkey-compat'
2353 b'server', b'bookmarks-pushkey-compat'
2354 )
2354 )
2355 bookmarksmode = op.modes.get(b'bookmarks', b'apply')
2355 bookmarksmode = op.modes.get(b'bookmarks', b'apply')
2356
2356
2357 if bookmarksmode == b'apply':
2357 if bookmarksmode == b'apply':
2358 tr = op.gettransaction()
2358 tr = op.gettransaction()
2359 bookstore = op.repo._bookmarks
2359 bookstore = op.repo._bookmarks
2360 if pushkeycompat:
2360 if pushkeycompat:
2361 allhooks = []
2361 allhooks = []
2362 for book, node in changes:
2362 for book, node in changes:
2363 hookargs = tr.hookargs.copy()
2363 hookargs = tr.hookargs.copy()
2364 hookargs[b'pushkeycompat'] = b'1'
2364 hookargs[b'pushkeycompat'] = b'1'
2365 hookargs[b'namespace'] = b'bookmarks'
2365 hookargs[b'namespace'] = b'bookmarks'
2366 hookargs[b'key'] = book
2366 hookargs[b'key'] = book
2367 hookargs[b'old'] = hex(bookstore.get(book, b''))
2367 hookargs[b'old'] = hex(bookstore.get(book, b''))
2368 hookargs[b'new'] = hex(node if node is not None else b'')
2368 hookargs[b'new'] = hex(node if node is not None else b'')
2369 allhooks.append(hookargs)
2369 allhooks.append(hookargs)
2370
2370
2371 for hookargs in allhooks:
2371 for hookargs in allhooks:
2372 op.repo.hook(
2372 op.repo.hook(
2373 b'prepushkey', throw=True, **pycompat.strkwargs(hookargs)
2373 b'prepushkey', throw=True, **pycompat.strkwargs(hookargs)
2374 )
2374 )
2375
2375
2376 for book, node in changes:
2376 for book, node in changes:
2377 if bookmarks.isdivergent(book):
2377 if bookmarks.isdivergent(book):
2378 msg = _(b'cannot accept divergent bookmark %s!') % book
2378 msg = _(b'cannot accept divergent bookmark %s!') % book
2379 raise error.Abort(msg)
2379 raise error.Abort(msg)
2380
2380
2381 bookstore.applychanges(op.repo, op.gettransaction(), changes)
2381 bookstore.applychanges(op.repo, op.gettransaction(), changes)
2382
2382
2383 if pushkeycompat:
2383 if pushkeycompat:
2384
2384
2385 def runhook(unused_success):
2385 def runhook(unused_success):
2386 for hookargs in allhooks:
2386 for hookargs in allhooks:
2387 op.repo.hook(b'pushkey', **pycompat.strkwargs(hookargs))
2387 op.repo.hook(b'pushkey', **pycompat.strkwargs(hookargs))
2388
2388
2389 op.repo._afterlock(runhook)
2389 op.repo._afterlock(runhook)
2390
2390
2391 elif bookmarksmode == b'records':
2391 elif bookmarksmode == b'records':
2392 for book, node in changes:
2392 for book, node in changes:
2393 record = {b'bookmark': book, b'node': node}
2393 record = {b'bookmark': book, b'node': node}
2394 op.records.add(b'bookmarks', record)
2394 op.records.add(b'bookmarks', record)
2395 else:
2395 else:
2396 raise error.ProgrammingError(
2396 raise error.ProgrammingError(
2397 b'unknown bookmark mode: %s' % bookmarksmode
2397 b'unknown bookmark mode: %s' % bookmarksmode
2398 )
2398 )
2399
2399
2400
2400
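The 'records' mode described in the docstring is selected on the bundle operation itself before processing; a pull-style sketch, assuming this module's bundleoperation and processbundle signatures:

op = bundleoperation(repo, lambda: repo.transaction(b'unbundle'))
op.modes[b'bookmarks'] = b'records'    # record changes instead of applying
processbundle(repo, unbundler, op=op)
# Bookmark moves are now available as op.records[b'bookmarks'] entries of
# the form {b'bookmark': name, b'node': node}.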
2401 @parthandler(b'phase-heads')
2401 @parthandler(b'phase-heads')
2402 def handlephases(op, inpart):
2402 def handlephases(op, inpart):
2403 """apply phases from bundle part to repo"""
2403 """apply phases from bundle part to repo"""
2404 headsbyphase = phases.binarydecode(inpart)
2404 headsbyphase = phases.binarydecode(inpart)
2405 phases.updatephases(op.repo.unfiltered(), op.gettransaction, headsbyphase)
2405 phases.updatephases(op.repo.unfiltered(), op.gettransaction, headsbyphase)
2406
2406
2407
2407
2408 @parthandler(b'reply:pushkey', (b'return', b'in-reply-to'))
2408 @parthandler(b'reply:pushkey', (b'return', b'in-reply-to'))
2409 def handlepushkeyreply(op, inpart):
2409 def handlepushkeyreply(op, inpart):
2410 """retrieve the result of a pushkey request"""
2410 """retrieve the result of a pushkey request"""
2411 ret = int(inpart.params[b'return'])
2411 ret = int(inpart.params[b'return'])
2412 partid = int(inpart.params[b'in-reply-to'])
2412 partid = int(inpart.params[b'in-reply-to'])
2413 op.records.add(b'pushkey', {b'return': ret}, partid)
2413 op.records.add(b'pushkey', {b'return': ret}, partid)
2414
2414
2415
2415
2416 @parthandler(b'obsmarkers')
2416 @parthandler(b'obsmarkers')
2417 def handleobsmarker(op, inpart):
2417 def handleobsmarker(op, inpart):
2418 """add a stream of obsmarkers to the repo"""
2418 """add a stream of obsmarkers to the repo"""
2419 tr = op.gettransaction()
2419 tr = op.gettransaction()
2420 markerdata = inpart.read()
2420 markerdata = inpart.read()
2421 if op.ui.config(b'experimental', b'obsmarkers-exchange-debug'):
2421 if op.ui.config(b'experimental', b'obsmarkers-exchange-debug'):
2422 op.ui.writenoi18n(
2422 op.ui.writenoi18n(
2423 b'obsmarker-exchange: %i bytes received\n' % len(markerdata)
2423 b'obsmarker-exchange: %i bytes received\n' % len(markerdata)
2424 )
2424 )
2425 # The mergemarkers call will crash if marker creation is not enabled.
2425 # The mergemarkers call will crash if marker creation is not enabled.
2426 # We want to avoid this if the part is advisory.
2426 # We want to avoid this if the part is advisory.
2427 if not inpart.mandatory and op.repo.obsstore.readonly:
2427 if not inpart.mandatory and op.repo.obsstore.readonly:
2428 op.repo.ui.debug(
2428 op.repo.ui.debug(
2429 b'ignoring obsolescence markers, feature not enabled\n'
2429 b'ignoring obsolescence markers, feature not enabled\n'
2430 )
2430 )
2431 return
2431 return
2432 new = op.repo.obsstore.mergemarkers(tr, markerdata)
2432 new = op.repo.obsstore.mergemarkers(tr, markerdata)
2433 op.repo.invalidatevolatilesets()
2433 op.repo.invalidatevolatilesets()
2434 op.records.add(b'obsmarkers', {b'new': new})
2434 op.records.add(b'obsmarkers', {b'new': new})
2435 if op.reply is not None:
2435 if op.reply is not None:
2436 rpart = op.reply.newpart(b'reply:obsmarkers')
2436 rpart = op.reply.newpart(b'reply:obsmarkers')
2437 rpart.addparam(
2437 rpart.addparam(
2438 b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
2438 b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
2439 )
2439 )
2440 rpart.addparam(b'new', b'%i' % new, mandatory=False)
2440 rpart.addparam(b'new', b'%i' % new, mandatory=False)
2441
2441
2442
2442
2443 @parthandler(b'reply:obsmarkers', (b'new', b'in-reply-to'))
2443 @parthandler(b'reply:obsmarkers', (b'new', b'in-reply-to'))
2444 def handleobsmarkerreply(op, inpart):
2444 def handleobsmarkerreply(op, inpart):
2445 """retrieve the result of a pushkey request"""
2445 """retrieve the result of a pushkey request"""
2446 ret = int(inpart.params[b'new'])
2446 ret = int(inpart.params[b'new'])
2447 partid = int(inpart.params[b'in-reply-to'])
2447 partid = int(inpart.params[b'in-reply-to'])
2448 op.records.add(b'obsmarkers', {b'new': ret}, partid)
2448 op.records.add(b'obsmarkers', {b'new': ret}, partid)
2449
2449
2450
2450
2451 @parthandler(b'hgtagsfnodes')
2451 @parthandler(b'hgtagsfnodes')
2452 def handlehgtagsfnodes(op, inpart):
2452 def handlehgtagsfnodes(op, inpart):
2453 """Applies .hgtags fnodes cache entries to the local repo.
2453 """Applies .hgtags fnodes cache entries to the local repo.
2454
2454
2455 Payload is pairs of 20-byte changeset nodes and filenodes.
2455 Payload is pairs of 20-byte changeset nodes and filenodes.
2456 """
2456 """
2457 # Grab the transaction so we ensure that we have the lock at this point.
2457 # Grab the transaction so we ensure that we have the lock at this point.
2458 if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
2458 if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
2459 op.gettransaction()
2459 op.gettransaction()
2460 cache = tags.hgtagsfnodescache(op.repo.unfiltered())
2460 cache = tags.hgtagsfnodescache(op.repo.unfiltered())
2461
2461
2462 count = 0
2462 count = 0
2463 while True:
2463 while True:
2464 node = inpart.read(20)
2464 node = inpart.read(20)
2465 fnode = inpart.read(20)
2465 fnode = inpart.read(20)
2466 if len(node) < 20 or len(fnode) < 20:
2466 if len(node) < 20 or len(fnode) < 20:
2467 op.ui.debug(b'ignoring incomplete received .hgtags fnodes data\n')
2467 op.ui.debug(b'ignoring incomplete received .hgtags fnodes data\n')
2468 break
2468 break
2469 cache.setfnode(node, fnode)
2469 cache.setfnode(node, fnode)
2470 count += 1
2470 count += 1
2471
2471
2472 cache.write()
2472 cache.write()
2473 op.ui.debug(b'applied %i hgtags fnodes cache entries\n' % count)
2473 op.ui.debug(b'applied %i hgtags fnodes cache entries\n' % count)
2474
2474
2475
2475
2476 rbcstruct = struct.Struct(b'>III')
2476 rbcstruct = struct.Struct(b'>III')
2477
2477
2478
2478
2479 @parthandler(b'cache:rev-branch-cache')
2479 @parthandler(b'cache:rev-branch-cache')
2480 def handlerbc(op, inpart):
2480 def handlerbc(op, inpart):
2481 """Legacy part, ignored for compatibility with bundles from or
2481 """Legacy part, ignored for compatibility with bundles from or
2482 for Mercurial before 5.7. Newer Mercurial computes the cache
2482 for Mercurial before 5.7. Newer Mercurial computes the cache
2483 efficiently enough during unbundling that the additional transfer
2483 efficiently enough during unbundling that the additional transfer
2484 is unnecessary."""
2484 is unnecessary."""
2485
2485
2486
2486
2487 @parthandler(b'pushvars')
2487 @parthandler(b'pushvars')
2488 def bundle2getvars(op, part):
2488 def bundle2getvars(op, part):
2489 '''unbundle a bundle2 containing shellvars on the server'''
2489 '''unbundle a bundle2 containing shellvars on the server'''
2490 # An option to disable unbundling on server-side for security reasons
2490 # An option to disable unbundling on server-side for security reasons
2491 if op.ui.configbool(b'push', b'pushvars.server'):
2491 if op.ui.configbool(b'push', b'pushvars.server'):
2492 hookargs = {}
2492 hookargs = {}
2493 for key, value in part.advisoryparams:
2493 for key, value in part.advisoryparams:
2494 key = key.upper()
2494 key = key.upper()
2495 # We want pushed variables to have USERVAR_ prepended so we know
2495 # We want pushed variables to have USERVAR_ prepended so we know
2496 # they came from the --pushvars flag.
2496 # they came from the --pushvars flag.
2497 key = b"USERVAR_" + key
2497 key = b"USERVAR_" + key
2498 hookargs[key] = value
2498 hookargs[key] = value
2499 op.addhookargs(hookargs)
2499 op.addhookargs(hookargs)
2500
2500
2501
2501
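On the client, each --pushvars KEY=VALUE pair becomes an advisory parameter of an advisory 'pushvars' part, so servers without the feature simply ignore it; a sketch (the helper name is illustrative):

def add_pushvars_part(bundler, shellvars):
    # shellvars: e.g. {b'DEBUG': b'1'}; the server-side handler above
    # upper-cases the keys and prefixes them with USERVAR_.
    part = bundler.newpart(b'pushvars', mandatory=False)
    for key, value in sorted(shellvars.items()):
        part.addparam(key, value, mandatory=False)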
2502 @parthandler(b'stream2', (b'requirements', b'filecount', b'bytecount'))
2502 @parthandler(b'stream2', (b'requirements', b'filecount', b'bytecount'))
2503 def handlestreamv2bundle(op, part):
2503 def handlestreamv2bundle(op, part):
2504
2504
2505 requirements = urlreq.unquote(part.params[b'requirements']).split(b',')
2505 requirements = urlreq.unquote(part.params[b'requirements']).split(b',')
2506 filecount = int(part.params[b'filecount'])
2506 filecount = int(part.params[b'filecount'])
2507 bytecount = int(part.params[b'bytecount'])
2507 bytecount = int(part.params[b'bytecount'])
2508
2508
2509 repo = op.repo
2509 repo = op.repo
2510 if len(repo):
2510 if len(repo):
2511 msg = _(b'cannot apply stream clone to non-empty repository')
2511 msg = _(b'cannot apply stream clone to non-empty repository')
2512 raise error.Abort(msg)
2512 raise error.Abort(msg)
2513
2513
2514 repo.ui.debug(b'applying stream bundle\n')
2514 repo.ui.debug(b'applying stream bundle\n')
2515 streamclone.applybundlev2(repo, part, filecount, bytecount, requirements)
2515 streamclone.applybundlev2(repo, part, filecount, bytecount, requirements)
2516
2516
2517
2517
2518 def widen_bundle(
2518 def widen_bundle(
2519 bundler, repo, oldmatcher, newmatcher, common, known, cgversion, ellipses
2519 bundler, repo, oldmatcher, newmatcher, common, known, cgversion, ellipses
2520 ):
2520 ):
2521 """generates bundle2 for widening a narrow clone
2521 """generates bundle2 for widening a narrow clone
2522
2522
2523 bundler is the bundle to which data should be added
2523 bundler is the bundle to which data should be added
2524 repo is the localrepository instance
2524 repo is the localrepository instance
2525 oldmatcher matches what the client already has
2525 oldmatcher matches what the client already has
2526 newmatcher matches what the client needs (including what it already has)
2526 newmatcher matches what the client needs (including what it already has)
2527 common is the set of common heads between server and client
2527 common is the set of common heads between server and client
2528 known is a set of revs known on the client side (used in ellipses)
2528 known is a set of revs known on the client side (used in ellipses)
2529 cgversion is the changegroup version to send
2529 cgversion is the changegroup version to send
2530 ellipses is a boolean value telling whether to send ellipses data or not
2530 ellipses is a boolean value telling whether to send ellipses data or not
2531
2531
2532 returns a bundle2 with the data required for widening
2532 returns a bundle2 with the data required for widening
2533 """
2533 """
2534 commonnodes = set()
2534 commonnodes = set()
2535 cl = repo.changelog
2535 cl = repo.changelog
2536 for r in repo.revs(b"::%ln", common):
2536 for r in repo.revs(b"::%ln", common):
2537 commonnodes.add(cl.node(r))
2537 commonnodes.add(cl.node(r))
2538 if commonnodes:
2538 if commonnodes:
2539 packer = changegroup.getbundler(
2539 packer = changegroup.getbundler(
2540 cgversion,
2540 cgversion,
2541 repo,
2541 repo,
2542 oldmatcher=oldmatcher,
2542 oldmatcher=oldmatcher,
2543 matcher=newmatcher,
2543 matcher=newmatcher,
2544 fullnodes=commonnodes,
2544 fullnodes=commonnodes,
2545 )
2545 )
2546 cgdata = packer.generate(
2546 cgdata = packer.generate(
2547 {nullid},
2547 {nullid},
2548 list(commonnodes),
2548 list(commonnodes),
2549 False,
2549 False,
2550 b'narrow_widen',
2550 b'narrow_widen',
2551 changelog=False,
2551 changelog=False,
2552 )
2552 )
2553
2553
2554 part = bundler.newpart(b'changegroup', data=cgdata)
2554 part = bundler.newpart(b'changegroup', data=cgdata)
2555 part.addparam(b'version', cgversion)
2555 part.addparam(b'version', cgversion)
2556 if scmutil.istreemanifest(repo):
2556 if scmutil.istreemanifest(repo):
2557 part.addparam(b'treemanifest', b'1')
2557 part.addparam(b'treemanifest', b'1')
2558 if b'exp-sidedata-flag' in repo.requirements:
2558 if b'exp-sidedata-flag' in repo.requirements:
2559 part.addparam(b'exp-sidedata', b'1')
2559 part.addparam(b'exp-sidedata', b'1')
2560
2560
2561 return bundler
2561 return bundler
@@ -1,4749 +1,4755 b''
1 # debugcommands.py - command processing for debug* commands
2 #
3 # Copyright 2005-2016 Matt Mackall <mpm@selenic.com>
4 #
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
7
8 from __future__ import absolute_import
9
10 import codecs
11 import collections
12 import difflib
13 import errno
14 import glob
15 import operator
16 import os
17 import platform
18 import random
19 import re
20 import socket
21 import ssl
22 import stat
23 import string
24 import subprocess
25 import sys
26 import time
27
28 from .i18n import _
29 from .node import (
30     bin,
31     hex,
32     nullid,
33     nullrev,
34     short,
35 )
36 from .pycompat import (
37     getattr,
38     open,
39 )
40 from . import (
41     bundle2,
42     bundlerepo,
43     changegroup,
44     cmdutil,
45     color,
46     context,
47     copies,
48     dagparser,
49     encoding,
50     error,
51     exchange,
52     extensions,
53     filemerge,
54     filesetlang,
55     formatter,
56     hg,
57     httppeer,
58     localrepo,
59     lock as lockmod,
60     logcmdutil,
61     mergestate as mergestatemod,
62     metadata,
63     obsolete,
64     obsutil,
65     pathutil,
66     phases,
67     policy,
68     pvec,
69     pycompat,
70     registrar,
71     repair,
72     repoview,
73     revlog,
74     revset,
75     revsetlang,
76     scmutil,
77     setdiscovery,
78     simplemerge,
79     sshpeer,
80     sslutil,
81     streamclone,
82     strip,
83     tags as tagsmod,
84     templater,
85     treediscovery,
86     upgrade,
87     url as urlmod,
88     util,
89     vfs as vfsmod,
90     wireprotoframing,
91     wireprotoserver,
92     wireprotov2peer,
93 )
94 from .utils import (
95     cborutil,
96     compression,
97     dateutil,
98     procutil,
99     stringutil,
100 )
101
102 from .revlogutils import (
103     deltas as deltautil,
104     nodemap,
105     sidedata,
106 )
107
108 release = lockmod.release
109
110 table = {}
111 table.update(strip.command._table)
112 command = registrar.command(table)
113
114
115 @command(b'debugancestor', [], _(b'[INDEX] REV1 REV2'), optionalrepo=True)
116 def debugancestor(ui, repo, *args):
117     """find the ancestor revision of two revisions in a given index"""
118     if len(args) == 3:
119         index, rev1, rev2 = args
120         r = revlog.revlog(vfsmod.vfs(encoding.getcwd(), audit=False), index)
121         lookup = r.lookup
122     elif len(args) == 2:
123         if not repo:
124             raise error.Abort(
125                 _(b'there is no Mercurial repository here (.hg not found)')
126             )
127         rev1, rev2 = args
128         r = repo.changelog
129         lookup = repo.lookup
130     else:
131         raise error.Abort(_(b'either two or three arguments required'))
132     a = r.ancestor(lookup(rev1), lookup(rev2))
133     ui.write(b'%d:%s\n' % (r.rev(a), hex(a)))
134
135
136 @command(b'debugantivirusrunning', [])
137 def debugantivirusrunning(ui, repo):
138     """attempt to trigger an antivirus scanner to see if one is active"""
139     with repo.cachevfs.open('eicar-test-file.com', b'wb') as f:
140         f.write(
141             util.b85decode(
142                 # This is a base85-armored version of the EICAR test file. See
143                 # https://en.wikipedia.org/wiki/EICAR_test_file for details.
144                 b'ST#=}P$fV?P+K%yP+C|uG$>GBDK|qyDK~v2MM*<JQY}+dK~6+LQba95P'
145                 b'E<)&Nm5l)EmTEQR4qnHOhq9iNGnJx'
146             )
147         )
148     # Give an AV engine time to scan the file.
149     time.sleep(2)
150     util.unlink(repo.cachevfs.join('eicar-test-file.com'))
151
152
153 @command(b'debugapplystreamclonebundle', [], b'FILE')
154 def debugapplystreamclonebundle(ui, repo, fname):
155     """apply a stream clone bundle file"""
156     f = hg.openpath(ui, fname)
157     gen = exchange.readbundle(ui, f, fname)
158     gen.apply(repo)
159
160
161 @command(
162     b'debugbuilddag',
163     [
164         (
165             b'm',
166             b'mergeable-file',
167             None,
168             _(b'add single file mergeable changes'),
169         ),
170         (
171             b'o',
172             b'overwritten-file',
173             None,
174             _(b'add single file all revs overwrite'),
175         ),
176         (b'n', b'new-file', None, _(b'add new file at each rev')),
177     ],
178     _(b'[OPTION]... [TEXT]'),
179 )
180 def debugbuilddag(
181     ui,
182     repo,
183     text=None,
184     mergeable_file=False,
185     overwritten_file=False,
186     new_file=False,
187 ):
188     """builds a repo with a given DAG from scratch in the current empty repo
189
190     The description of the DAG is read from stdin if not given on the
191     command line.
192
193     Elements:
194
195     - "+n" is a linear run of n nodes based on the current default parent
196     - "." is a single node based on the current default parent
197     - "$" resets the default parent to null (implied at the start);
198       otherwise the default parent is always the last node created
199     - "<p" sets the default parent to the backref p
200     - "*p" is a fork at parent p, which is a backref
201     - "*p1/p2" is a merge of parents p1 and p2, which are backrefs
202     - "/p2" is a merge of the preceding node and p2
203     - ":tag" defines a local tag for the preceding node
204     - "@branch" sets the named branch for subsequent nodes
205     - "#...\\n" is a comment up to the end of the line
206
207     Whitespace between the above elements is ignored.
208
209     A backref is either
210
211     - a number n, which references the node curr-n, where curr is the current
212       node, or
213     - the name of a local tag you placed earlier using ":tag", or
214     - empty to denote the default parent.
215
216     All string-valued elements are either strictly alphanumeric, or must
217     be enclosed in double quotes ("..."), with "\\" as escape character.
218     """
219
220     if text is None:
221         ui.status(_(b"reading DAG from stdin\n"))
222         text = ui.fin.read()
223
224     cl = repo.changelog
225     if len(cl) > 0:
226         raise error.Abort(_(b'repository is not empty'))
227
228     # determine number of revs in DAG
229     total = 0
230     for type, data in dagparser.parsedag(text):
231         if type == b'n':
232             total += 1
233
234     if mergeable_file:
235         linesperrev = 2
236         # make a file with k lines per rev
237         initialmergedlines = [
238             b'%d' % i for i in pycompat.xrange(0, total * linesperrev)
239         ]
240         initialmergedlines.append(b"")
241
242     tags = []
243     progress = ui.makeprogress(
244         _(b'building'), unit=_(b'revisions'), total=total
245     )
246     with progress, repo.wlock(), repo.lock(), repo.transaction(b"builddag"):
247         at = -1
248         atbranch = b'default'
249         nodeids = []
250         id = 0
251         progress.update(id)
252         for type, data in dagparser.parsedag(text):
253             if type == b'n':
254                 ui.note((b'node %s\n' % pycompat.bytestr(data)))
255                 id, ps = data
256
257                 files = []
258                 filecontent = {}
259
260                 p2 = None
261                 if mergeable_file:
262                     fn = b"mf"
263                     p1 = repo[ps[0]]
264                     if len(ps) > 1:
265                         p2 = repo[ps[1]]
266                         pa = p1.ancestor(p2)
267                         base, local, other = [
268                             x[fn].data() for x in (pa, p1, p2)
269                         ]
270                         m3 = simplemerge.Merge3Text(base, local, other)
271                         ml = [l.strip() for l in m3.merge_lines()]
272                         ml.append(b"")
273                     elif at > 0:
274                         ml = p1[fn].data().split(b"\n")
275                     else:
276                         ml = initialmergedlines
277                     ml[id * linesperrev] += b" r%i" % id
278                     mergedtext = b"\n".join(ml)
279                     files.append(fn)
280                     filecontent[fn] = mergedtext
281
282                 if overwritten_file:
283                     fn = b"of"
284                     files.append(fn)
285                     filecontent[fn] = b"r%i\n" % id
286
287                 if new_file:
288                     fn = b"nf%i" % id
289                     files.append(fn)
290                     filecontent[fn] = b"r%i\n" % id
291                     if len(ps) > 1:
292                         if not p2:
293                             p2 = repo[ps[1]]
294                         for fn in p2:
295                             if fn.startswith(b"nf"):
296                                 files.append(fn)
297                                 filecontent[fn] = p2[fn].data()
298
299                 def fctxfn(repo, cx, path):
300                     if path in filecontent:
301                         return context.memfilectx(
302                             repo, cx, path, filecontent[path]
303                         )
304                     return None
305
306                 if len(ps) == 0 or ps[0] < 0:
307                     pars = [None, None]
308                 elif len(ps) == 1:
309                     pars = [nodeids[ps[0]], None]
310                 else:
311                     pars = [nodeids[p] for p in ps]
312                 cx = context.memctx(
313                     repo,
314                     pars,
315                     b"r%i" % id,
316                     files,
317                     fctxfn,
318                     date=(id, 0),
319                     user=b"debugbuilddag",
320                     extra={b'branch': atbranch},
321                 )
322                 nodeid = repo.commitctx(cx)
323                 nodeids.append(nodeid)
324                 at = id
325             elif type == b'l':
326                 id, name = data
327                 ui.note((b'tag %s\n' % name))
328                 tags.append(b"%s %s\n" % (hex(repo.changelog.node(id)), name))
329             elif type == b'a':
330                 ui.note((b'branch %s\n' % data))
331                 atbranch = data
332             progress.update(id)
333
334     if tags:
335         repo.vfs.write(b"localtags", b"".join(tags))
336
337
338 def _debugchangegroup(ui, gen, all=None, indent=0, **opts):
339     indent_string = b' ' * indent
340     if all:
341         ui.writenoi18n(
342             b"%sformat: id, p1, p2, cset, delta base, len(delta)\n"
343             % indent_string
344         )
345
346         def showchunks(named):
347             ui.write(b"\n%s%s\n" % (indent_string, named))
348             for deltadata in gen.deltaiter():
349                 node, p1, p2, cs, deltabase, delta, flags = deltadata
350                 ui.write(
351                     b"%s%s %s %s %s %s %d\n"
352                     % (
353                         indent_string,
354                         hex(node),
355                         hex(p1),
356                         hex(p2),
357                         hex(cs),
358                         hex(deltabase),
359                         len(delta),
360                     )
361                 )
362
363         gen.changelogheader()
364         showchunks(b"changelog")
365         gen.manifestheader()
366         showchunks(b"manifest")
367         for chunkdata in iter(gen.filelogheader, {}):
368             fname = chunkdata[b'filename']
369             showchunks(fname)
370     else:
371         if isinstance(gen, bundle2.unbundle20):
372             raise error.Abort(_(b'use debugbundle2 for this file'))
373         gen.changelogheader()
374         for deltadata in gen.deltaiter():
375             node, p1, p2, cs, deltabase, delta, flags = deltadata
376             ui.write(b"%s%s\n" % (indent_string, hex(node)))
377
378
379 def _debugobsmarkers(ui, part, indent=0, **opts):
380     """display version and markers contained in 'data'"""
381     opts = pycompat.byteskwargs(opts)
382     data = part.read()
383     indent_string = b' ' * indent
384     try:
385         version, markers = obsolete._readmarkers(data)
386     except error.UnknownVersion as exc:
387         msg = b"%sunsupported version: %s (%d bytes)\n"
388         msg %= indent_string, exc.version, len(data)
389         ui.write(msg)
390     else:
391         msg = b"%sversion: %d (%d bytes)\n"
392         msg %= indent_string, version, len(data)
393         ui.write(msg)
394         fm = ui.formatter(b'debugobsolete', opts)
395         for rawmarker in sorted(markers):
396             m = obsutil.marker(None, rawmarker)
397             fm.startitem()
398             fm.plain(indent_string)
399             cmdutil.showmarker(fm, m)
400         fm.end()
401
402
403 def _debugphaseheads(ui, data, indent=0):
404 """display version and markers contained in 'data'"""
404 """display version and markers contained in 'data'"""
405 indent_string = b' ' * indent
405 indent_string = b' ' * indent
406 headsbyphase = phases.binarydecode(data)
406 headsbyphase = phases.binarydecode(data)
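    # (editor's note) phases.phasenames, used below, maps each decoded
    # phase number to its name, e.g. public, draft, or secret.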
407     for phase in phases.allphases:
408         for head in headsbyphase[phase]:
409             ui.write(indent_string)
410             ui.write(b'%s %s\n' % (hex(head), phases.phasenames[phase]))
411
412
413 def _quasirepr(thing):
414     if isinstance(thing, (dict, util.sortdict, collections.OrderedDict)):
415         return b'{%s}' % (
416             b', '.join(b'%s: %s' % (k, thing[k]) for k in sorted(thing))
417         )
418     return pycompat.bytestr(repr(thing))
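
# Illustrative example (editor's sketch, not part of the original source):
# _quasirepr sorts mapping keys, so its output is deterministic, e.g.
#
#   _quasirepr({b'b': b'y', b'a': b'x'}) == b'{a: x, b: y}'
#
# which keeps the part listings below stable across runs.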
419
420
421 def _debugbundle2(ui, gen, all=None, **opts):
422     """lists the contents of a bundle2"""
423     if not isinstance(gen, bundle2.unbundle20):
424         raise error.Abort(_(b'not a bundle2 file'))
425     ui.write((b'Stream params: %s\n' % _quasirepr(gen.params)))
426     parttypes = opts.get('part_type', [])
427     for part in gen.iterparts():
428         if parttypes and part.type not in parttypes:
429             continue
430         msg = b'%s -- %s (mandatory: %r)\n'
431         ui.write((msg % (part.type, _quasirepr(part.params), part.mandatory)))
432         if part.type == b'changegroup':
433             version = part.params.get(b'version', b'01')
434             cg = changegroup.getunbundler(version, part, b'UN')
435             if not ui.quiet:
436                 _debugchangegroup(ui, cg, all=all, indent=4, **opts)
437         if part.type == b'obsmarkers':
438             if not ui.quiet:
439                 _debugobsmarkers(ui, part, indent=4, **opts)
440         if part.type == b'phase-heads':
441             if not ui.quiet:
442                 _debugphaseheads(ui, part, indent=4)
443
444
445 @command(
446     b'debugbundle',
447     [
448         (b'a', b'all', None, _(b'show all details')),
449         (b'', b'part-type', [], _(b'show only the named part type')),
450         (b'', b'spec', None, _(b'print the bundlespec of the bundle')),
451     ],
452     _(b'FILE'),
453     norepo=True,
454 )
455 def debugbundle(ui, bundlepath, all=None, spec=None, **opts):
456     """lists the contents of a bundle"""
457     with hg.openpath(ui, bundlepath) as f:
458         if spec:
459             spec = exchange.getbundlespec(ui, f)
460             ui.write(b'%s\n' % spec)
461             return
462
463         gen = exchange.readbundle(ui, f, bundlepath)
464         if isinstance(gen, bundle2.unbundle20):
465             return _debugbundle2(ui, gen, all=all, **opts)
466         _debugchangegroup(ui, gen, all=all, **opts)
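
# Illustrative usage (editor's sketch; the file name is hypothetical):
#
#   hg debugbundle --spec changes.hg   # prints a bundlespec such as gzip-v2
#   hg debugbundle --all changes.hg    # also dumps every delta chunk
#
# Bundle2 files are dispatched to _debugbundle2 above; plain changegroup
# files go straight to _debugchangegroup.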
467
468
469 @command(b'debugcapabilities', [], _(b'PATH'), norepo=True)
470 def debugcapabilities(ui, path, **opts):
471     """lists the capabilities of a remote peer"""
472     opts = pycompat.byteskwargs(opts)
473     peer = hg.peer(ui, opts, path)
474     caps = peer.capabilities()
475     ui.writenoi18n(b'Main capabilities:\n')
476     for c in sorted(caps):
477         ui.write(b'  %s\n' % c)
478     b2caps = bundle2.bundle2caps(peer)
479     if b2caps:
480         ui.writenoi18n(b'Bundle2 capabilities:\n')
481         for key, values in sorted(pycompat.iteritems(b2caps)):
482             ui.write(b'  %s\n' % key)
483             for v in values:
484                 ui.write(b'    %s\n' % v)
485
486
487 @command(
488     b'debugchangedfiles',
489     [
490         (
491             b'',
492             b'compute',
493             False,
494             b"compute information instead of reading it from storage",
495         ),
496     ],
497     b'REV',
498 )
499 def debugchangedfiles(ui, repo, rev, **opts):
500     """list the stored file changes for a revision"""
501     ctx = scmutil.revsingle(repo, rev, None)
502     files = None
503
504     if opts['compute']:
505         files = metadata.compute_all_files_changes(ctx)
506     else:
507         sd = repo.changelog.sidedata(ctx.rev())
508         files_block = sd.get(sidedata.SD_FILES)
509         if files_block is not None:
510             files = metadata.decode_files_sidedata(sd)
511     if files is not None:
512         for f in sorted(files.touched):
513             if f in files.added:
514                 action = b"added"
515             elif f in files.removed:
516                 action = b"removed"
517             elif f in files.merged:
518                 action = b"merged"
519             elif f in files.salvaged:
520                 action = b"salvaged"
521             else:
522                 action = b"touched"
523
524             copy_parent = b""
525             copy_source = b""
526             if f in files.copied_from_p1:
527                 copy_parent = b"p1"
528                 copy_source = files.copied_from_p1[f]
529             elif f in files.copied_from_p2:
530                 copy_parent = b"p2"
531                 copy_source = files.copied_from_p2[f]
532
533             data = (action, copy_parent, f, copy_source)
534             template = b"%-8s %2s: %s, %s;\n"
535             ui.write(template % data)
536
537
538 @command(b'debugcheckstate', [], b'')
539 def debugcheckstate(ui, repo):
540     """validate the correctness of the current dirstate"""
541     parent1, parent2 = repo.dirstate.parents()
542     m1 = repo[parent1].manifest()
543     m2 = repo[parent2].manifest()
544     errors = 0
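    # (editor's note) dirstate state letters checked below: 'n' = normal,
    # 'a' = added, 'r' = removed, 'm' = merged (2-parent entry).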
545     for f in repo.dirstate:
546         state = repo.dirstate[f]
547         if state in b"nr" and f not in m1:
548             ui.warn(_(b"%s in state %s, but not in manifest1\n") % (f, state))
549             errors += 1
550         if state in b"a" and f in m1:
551             ui.warn(_(b"%s in state %s, but also in manifest1\n") % (f, state))
552             errors += 1
553         if state in b"m" and f not in m1 and f not in m2:
554             ui.warn(
555                 _(b"%s in state %s, but not in either manifest\n") % (f, state)
556             )
557             errors += 1
558     for f in m1:
559         state = repo.dirstate[f]
560         if state not in b"nrm":
561             ui.warn(_(b"%s in manifest1, but listed as state %s") % (f, state))
562             errors += 1
563     if errors:
564         errstr = _(b".hg/dirstate inconsistent with current parent's manifest")
565         raise error.Abort(errstr)
566
567
568 @command(
569     b'debugcolor',
570     [(b'', b'style', None, _(b'show all configured styles'))],
571     b'hg debugcolor',
572 )
573 def debugcolor(ui, repo, **opts):
574     """show available colors, effects or styles"""
575     ui.writenoi18n(b'color mode: %s\n' % stringutil.pprint(ui._colormode))
576     if opts.get('style'):
577         return _debugdisplaystyle(ui)
578     else:
579         return _debugdisplaycolor(ui)
580
581
582 def _debugdisplaycolor(ui):
583     ui = ui.copy()
584     ui._styles.clear()
585     for effect in color._activeeffects(ui).keys():
586         ui._styles[effect] = effect
587     if ui._terminfoparams:
588         for k, v in ui.configitems(b'color'):
589             if k.startswith(b'color.'):
590                 ui._styles[k] = k[6:]
591             elif k.startswith(b'terminfo.'):
592                 ui._styles[k] = k[9:]
593     ui.write(_(b'available colors:\n'))
594     # sort labels with a '_' after the others to group the '_background' entries.
595     items = sorted(ui._styles.items(), key=lambda i: (b'_' in i[0], i[0], i[1]))
596     for colorname, label in items:
597         ui.write(b'%s\n' % colorname, label=label)
598
599
600 def _debugdisplaystyle(ui):
601     ui.write(_(b'available style:\n'))
602     if not ui._styles:
603         return
604     width = max(len(s) for s in ui._styles)
605     for label, effects in sorted(ui._styles.items()):
606         ui.write(b'%s' % label, label=label)
607         if effects:
608             # 50
609             ui.write(b': ')
610             ui.write(b' ' * (max(0, width - len(label))))
611             ui.write(b', '.join(ui.label(e, e) for e in effects.split()))
612         ui.write(b'\n')
613
614
615 @command(b'debugcreatestreamclonebundle', [], b'FILE')
616 def debugcreatestreamclonebundle(ui, repo, fname):
617     """create a stream clone bundle file
618
619     Stream bundles are special bundles that are essentially archives of
620     revlog files. They are commonly used for cloning very quickly.
621     """
622     # TODO we may want to turn this into an abort when this functionality
623     # is moved into `hg bundle`.
624     if phases.hassecret(repo):
625         ui.warn(
626             _(
627                 b'(warning: stream clone bundle will contain secret '
628                 b'revisions)\n'
629             )
630         )
631
632     requirements, gen = streamclone.generatebundlev1(repo)
633     changegroup.writechunks(ui, gen, fname)
634
635     ui.write(_(b'bundle requirements: %s\n') % b', '.join(sorted(requirements)))
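
# Illustrative pairing (editor's note; the file name is hypothetical):
#
#   hg debugcreatestreamclonebundle stream.hg   # in the source repository
#   hg debugapplystreamclonebundle stream.hg    # in an empty repository
#
# uses this command together with debugapplystreamclonebundle above.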
636
637
638 @command(
639     b'debugdag',
640     [
641         (b't', b'tags', None, _(b'use tags as labels')),
642         (b'b', b'branches', None, _(b'annotate with branch names')),
643         (b'', b'dots', None, _(b'use dots for runs')),
644         (b's', b'spaces', None, _(b'separate elements by spaces')),
645     ],
646     _(b'[OPTION]... [FILE [REV]...]'),
647     optionalrepo=True,
648 )
649 def debugdag(ui, repo, file_=None, *revs, **opts):
650     """format the changelog or an index DAG as a concise textual description
651
652     If you pass a revlog index, the revlog's DAG is emitted. If you list
653     revision numbers, they get labeled in the output as rN.
654
655     Otherwise, the changelog DAG of the current repo is emitted.
656     """
657     spaces = opts.get('spaces')
658     dots = opts.get('dots')
659     if file_:
660         rlog = revlog.revlog(vfsmod.vfs(encoding.getcwd(), audit=False), file_)
661         revs = {int(r) for r in revs}
662
663         def events():
664             for r in rlog:
665                 yield b'n', (r, list(p for p in rlog.parentrevs(r) if p != -1))
666                 if r in revs:
667                     yield b'l', (r, b"r%i" % r)
668
669     elif repo:
670         cl = repo.changelog
671         tags = opts.get('tags')
672         branches = opts.get('branches')
673         if tags:
674             labels = {}
675             for l, n in repo.tags().items():
676                 labels.setdefault(cl.rev(n), []).append(l)
677
678         def events():
679             b = b"default"
680             for r in cl:
681                 if branches:
682                     newb = cl.read(cl.node(r))[5][b'branch']
683                     if newb != b:
684                         yield b'a', newb
685                         b = newb
686                 yield b'n', (r, list(p for p in cl.parentrevs(r) if p != -1))
687                 if tags:
688                     ls = labels.get(r)
689                     if ls:
690                         for l in ls:
691                             yield b'l', (r, l)
692
693     else:
694         raise error.Abort(_(b'need repo for changelog dag'))
695
696     for line in dagparser.dagtextlines(
697         events(),
698         addspaces=spaces,
699         wraplabels=True,
700         wrapannotations=True,
701         wrapnonlinear=dots,
702         usedots=dots,
703         maxlinewidth=70,
704     ):
705         ui.write(line)
706         ui.write(b"\n")
707
708
709 @command(b'debugdata', cmdutil.debugrevlogopts, _(b'-c|-m|FILE REV'))
710 def debugdata(ui, repo, file_, rev=None, **opts):
711     """dump the contents of a data file revision"""
712     opts = pycompat.byteskwargs(opts)
713     if opts.get(b'changelog') or opts.get(b'manifest') or opts.get(b'dir'):
714         if rev is not None:
715             raise error.CommandError(b'debugdata', _(b'invalid arguments'))
716         file_, rev = None, file_
717     elif rev is None:
718         raise error.CommandError(b'debugdata', _(b'invalid arguments'))
719     r = cmdutil.openstorage(repo, b'debugdata', file_, opts)
720     try:
721         ui.write(r.rawdata(r.lookup(rev)))
722     except KeyError:
723         raise error.Abort(_(b'invalid revision identifier %s') % rev)
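
# Illustrative usage (editor's sketch): `hg debugdata -c 0` dumps the raw
# changelog entry for revision 0, while `hg debugdata FILE REV` opens the
# named revlog directly; the argument shuffling above lets -c/-m reuse the
# positional slot that otherwise carries the file name.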
724
725
726 @command(
727     b'debugdate',
728     [(b'e', b'extended', None, _(b'try extended date formats'))],
729     _(b'[-e] DATE [RANGE]'),
730     norepo=True,
731     optionalrepo=True,
732 )
733 def debugdate(ui, date, range=None, **opts):
734     """parse and display a date"""
735     if opts["extended"]:
736         d = dateutil.parsedate(date, dateutil.extendeddateformats)
737     else:
738         d = dateutil.parsedate(date)
739     ui.writenoi18n(b"internal: %d %d\n" % d)
740     ui.writenoi18n(b"standard: %s\n" % dateutil.datestr(d))
741     if range:
742         m = dateutil.matchdate(range)
743         ui.writenoi18n(b"match: %s\n" % m(d[0]))
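
# Illustrative usage (editor's sketch): `hg debugdate '2006-02-01 13:00:30'`
# prints the internal (unixtime, tzoffset) pair and the standard rendering;
# passing a RANGE additionally reports whether the parsed date matches it.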
744
745
746 @command(
747     b'debugdeltachain',
748     cmdutil.debugrevlogopts + cmdutil.formatteropts,
749     _(b'-c|-m|FILE'),
750     optionalrepo=True,
751 )
752 def debugdeltachain(ui, repo, file_=None, **opts):
753     """dump information about delta chains in a revlog
754
755     Output can be templatized. Available template keywords are:
756
757     :``rev``: revision number
758     :``chainid``: delta chain identifier (numbered by unique base)
759     :``chainlen``: delta chain length to this revision
760     :``prevrev``: previous revision in delta chain
761     :``deltatype``: role of delta / how it was computed
762     :``compsize``: compressed size of revision
763     :``uncompsize``: uncompressed size of revision
764     :``chainsize``: total size of compressed revisions in chain
765     :``chainratio``: total chain size divided by uncompressed revision size
766                      (new delta chains typically start at ratio 2.00)
767     :``lindist``: linear distance from base revision in delta chain to end
768                   of this revision
769     :``extradist``: total size of revisions not part of this delta chain from
770                     base of delta chain to end of this revision; a measurement
771                     of how much extra data we need to read/seek across to read
772                     the delta chain for this revision
773     :``extraratio``: extradist divided by chainsize; another representation of
774                      how much unrelated data is needed to load this delta chain
775
776     If the repository is configured to use sparse reads, additional keywords
777     are available:
778
779     :``readsize``: total size of data read from the disk for a revision
780                    (sum of the sizes of all the blocks)
781     :``largestblock``: size of the largest block of data read from the disk
782     :``readdensity``: density of useful bytes in the data read from the disk
783     :``srchunks``: in how many data hunks the whole revision would be read
784
785     Sparse reads can be enabled with experimental.sparse-read = True
786     """
787     opts = pycompat.byteskwargs(opts)
788     r = cmdutil.openrevlog(repo, b'debugdeltachain', file_, opts)
789     index = r.index
790     start = r.start
791     length = r.length
792     generaldelta = r.version & revlog.FLAG_GENERALDELTA
793     withsparseread = getattr(r, '_withsparseread', False)
794
795     def revinfo(rev):
796         e = index[rev]
797         compsize = e[1]
798         uncompsize = e[2]
799         chainsize = 0
800
801         if generaldelta:
802             if e[3] == e[5]:
803                 deltatype = b'p1'
804             elif e[3] == e[6]:
805                 deltatype = b'p2'
806             elif e[3] == rev - 1:
807                 deltatype = b'prev'
808             elif e[3] == rev:
809                 deltatype = b'base'
810             else:
811                 deltatype = b'other'
812         else:
813             if e[3] == rev:
814                 deltatype = b'base'
815             else:
816                 deltatype = b'prev'
817
818         chain = r._deltachain(rev)[0]
819         for iterrev in chain:
820             e = index[iterrev]
821             chainsize += e[1]
822
823         return compsize, uncompsize, deltatype, chain, chainsize
824
825     fm = ui.formatter(b'debugdeltachain', opts)
826
827     fm.plain(
828         b'    rev  chain# chainlen     prev   delta       '
829         b'size    rawsize  chainsize     ratio   lindist extradist '
830         b'extraratio'
831     )
832     if withsparseread:
833         fm.plain(b'   readsize largestblk rddensity srchunks')
834     fm.plain(b'\n')
835
836     chainbases = {}
837     for rev in r:
838         comp, uncomp, deltatype, chain, chainsize = revinfo(rev)
839         chainbase = chain[0]
840         chainid = chainbases.setdefault(chainbase, len(chainbases) + 1)
841         basestart = start(chainbase)
842         revstart = start(rev)
843         lineardist = revstart + comp - basestart
844         extradist = lineardist - chainsize
845         try:
846             prevrev = chain[-2]
847         except IndexError:
848             prevrev = -1
849
850         if uncomp != 0:
851             chainratio = float(chainsize) / float(uncomp)
852         else:
853             chainratio = chainsize
854
855         if chainsize != 0:
856             extraratio = float(extradist) / float(chainsize)
857         else:
858             extraratio = extradist
859
860         fm.startitem()
861         fm.write(
862             b'rev chainid chainlen prevrev deltatype compsize '
863             b'uncompsize chainsize chainratio lindist extradist '
864             b'extraratio',
865             b'%7d %7d %8d %8d %7s %10d %10d %10d %9.5f %9d %9d %10.5f',
866             rev,
867             chainid,
868             len(chain),
869             prevrev,
870             deltatype,
871             comp,
872             uncomp,
873             chainsize,
874             chainratio,
875             lineardist,
876             extradist,
877             extraratio,
878             rev=rev,
879             chainid=chainid,
880             chainlen=len(chain),
881             prevrev=prevrev,
882             deltatype=deltatype,
883             compsize=comp,
884             uncompsize=uncomp,
885             chainsize=chainsize,
886             chainratio=chainratio,
887             lindist=lineardist,
888             extradist=extradist,
889             extraratio=extraratio,
890         )
891         if withsparseread:
892             readsize = 0
893             largestblock = 0
894             srchunks = 0
895
896             for revschunk in deltautil.slicechunk(r, chain):
897                 srchunks += 1
898                 blkend = start(revschunk[-1]) + length(revschunk[-1])
899                 blksize = blkend - start(revschunk[0])
900
901                 readsize += blksize
902                 if largestblock < blksize:
903                     largestblock = blksize
904
905             if readsize:
906                 readdensity = float(chainsize) / float(readsize)
907             else:
908                 readdensity = 1
909
910             fm.write(
911                 b'readsize largestblock readdensity srchunks',
912                 b' %10d %10d %9.5f %8d',
913                 readsize,
914                 largestblock,
915                 readdensity,
916                 srchunks,
917                 readsize=readsize,
918                 largestblock=largestblock,
919                 readdensity=readdensity,
920                 srchunks=srchunks,
921             )
922
923         fm.plain(b'\n')
924
925     fm.end()
926
927
928 @command(
929     b'debugdirstate|debugstate',
930     [
931         (
932             b'',
933             b'nodates',
934             None,
935             _(b'do not display the saved mtime (DEPRECATED)'),
936         ),
937         (b'', b'dates', True, _(b'display the saved mtime')),
938         (b'', b'datesort', None, _(b'sort by saved mtime')),
939     ],
940     _(b'[OPTION]...'),
941 )
942 def debugstate(ui, repo, **opts):
943     """show the contents of the current dirstate"""
944
945     nodates = not opts['dates']
946     if opts.get('nodates') is not None:
947         nodates = True
948     datesort = opts.get('datesort')
949
950     if datesort:
951         keyfunc = lambda x: (x[1][3], x[0])  # sort by mtime, then by filename
952     else:
953         keyfunc = None  # sort by filename
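    # (editor's note) each dirstate entry iterated below is a
    # (state, mode, size, mtime) tuple; ent[3] == -1 means the mtime is unset.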
954     for file_, ent in sorted(pycompat.iteritems(repo.dirstate), key=keyfunc):
955         if ent[3] == -1:
956             timestr = b'unset               '
957         elif nodates:
958             timestr = b'set                 '
959         else:
960             timestr = time.strftime(
961                 "%Y-%m-%d %H:%M:%S ", time.localtime(ent[3])
962             )
963             timestr = encoding.strtolocal(timestr)
964         if ent[1] & 0o20000:
965             mode = b'lnk'
966         else:
967             mode = b'%3o' % (ent[1] & 0o777 & ~util.umask)
968         ui.write(b"%c %s %10d %s%s\n" % (ent[0], mode, ent[2], timestr, file_))
969     for f in repo.dirstate.copies():
970         ui.write(_(b"copy: %s -> %s\n") % (repo.dirstate.copied(f), f))
971
972
973 @command(
974     b'debugdiscovery',
975     [
976         (b'', b'old', None, _(b'use old-style discovery')),
977         (
978             b'',
979             b'nonheads',
980             None,
981             _(b'use old-style discovery with non-heads included'),
982         ),
983         (b'', b'rev', [], b'restrict discovery to this set of revs'),
984         (b'', b'seed', b'12323', b'specify the random seed used for discovery'),
985         (
986             b'',
987             b'local-as-revs',
988             "",
989             'treat local as having these revisions only',
990         ),
991         (
992             b'',
993             b'remote-as-revs',
994             "",
995             'use local as remote, with only these revisions',
996         ),
997     ]
998     + cmdutil.remoteopts,
999     _(b'[--rev REV] [OTHER]'),
1000 )
1001 def debugdiscovery(ui, repo, remoteurl=b"default", **opts):
1002     """runs the changeset discovery protocol in isolation
1003
1004     The local peer can be "replaced" by a subset of the local repository by
1005     using the `--local-as-revs` flag. In the same way, the usual `remote` peer
1006     can be "replaced" by a subset of the local repository using the
1007     `--remote-as-revs` flag. This is useful to efficiently debug pathological
1008     discovery situations.
1009     """
1010     opts = pycompat.byteskwargs(opts)
1011     unfi = repo.unfiltered()
1012
1013     # setup potential extra filtering
1014     local_revs = opts[b"local_as_revs"]
1015     remote_revs = opts[b"remote_as_revs"]
1016
1017     # make sure tests are repeatable
1018     random.seed(int(opts[b'seed']))
1019
1020     if not remote_revs:
1021
1022         remoteurl, branches = hg.parseurl(ui.expandpath(remoteurl))
1023         remote = hg.peer(repo, opts, remoteurl)
1024         ui.status(_(b'comparing with %s\n') % util.hidepassword(remoteurl))
1025     else:
1026         branches = (None, [])
1027         remote_filtered_revs = scmutil.revrange(
1028             unfi, [b"not (::(%s))" % remote_revs]
1029         )
1030         remote_filtered_revs = frozenset(remote_filtered_revs)
1031
1032         def remote_func(x):
1033             return remote_filtered_revs
1034
1035         repoview.filtertable[b'debug-discovery-remote-filter'] = remote_func
1036
1037         remote = repo.peer()
1038         remote._repo = remote._repo.filtered(b'debug-discovery-remote-filter')
1039
1040     if local_revs:
1041         local_filtered_revs = scmutil.revrange(
1042             unfi, [b"not (::(%s))" % local_revs]
1043         )
1044         local_filtered_revs = frozenset(local_filtered_revs)
1045
1046         def local_func(x):
1047             return local_filtered_revs
1048
1049         repoview.filtertable[b'debug-discovery-local-filter'] = local_func
1050         repo = repo.filtered(b'debug-discovery-local-filter')
1051
1052     data = {}
1053     if opts.get(b'old'):
1054
1055         def doit(pushedrevs, remoteheads, remote=remote):
1056             if not util.safehasattr(remote, b'branches'):
1057                 # enable in-client legacy support
1058                 remote = localrepo.locallegacypeer(remote.local())
1059             common, _in, hds = treediscovery.findcommonincoming(
1060                 repo, remote, force=True, audit=data
1061             )
1062             common = set(common)
1063             if not opts.get(b'nonheads'):
1064                 ui.writenoi18n(
1065                     b"unpruned common: %s\n"
1066                     % b" ".join(sorted(short(n) for n in common))
1067                 )
1068
1069                 clnode = repo.changelog.node
1070                 common = repo.revs(b'heads(::%ln)', common)
1071                 common = {clnode(r) for r in common}
1072             return common, hds
1073
1074     else:
1075
1076         def doit(pushedrevs, remoteheads, remote=remote):
1077             nodes = None
1078             if pushedrevs:
1079                 revs = scmutil.revrange(repo, pushedrevs)
1080                 nodes = [repo[r].node() for r in revs]
1081             common, any, hds = setdiscovery.findcommonheads(
1082                 ui, repo, remote, ancestorsof=nodes, audit=data
1083             )
1084             return common, hds
1085
1086     remoterevs, _checkout = hg.addbranchrevs(repo, remote, branches, revs=None)
1087     localrevs = opts[b'rev']
1088     with util.timedcm('debug-discovery') as t:
1089         common, hds = doit(localrevs, remoterevs)
1090
1091     # compute all statistics
1092     heads_common = set(common)
1093     heads_remote = set(hds)
1094     heads_local = set(repo.heads())
1095 # note: they cannot be a local or remote head that is in common and not
1095 # note: they cannot be a local or remote head that is in common and not
1096 # itself a head of common.
1096 # itself a head of common.
1097 heads_common_local = heads_common & heads_local
1097 heads_common_local = heads_common & heads_local
1098 heads_common_remote = heads_common & heads_remote
1098 heads_common_remote = heads_common & heads_remote
1099 heads_common_both = heads_common & heads_remote & heads_local
1099 heads_common_both = heads_common & heads_remote & heads_local
1100
1100
1101 all = repo.revs(b'all()')
1101 all = repo.revs(b'all()')
1102 common = repo.revs(b'::%ln', common)
1102 common = repo.revs(b'::%ln', common)
1103 roots_common = repo.revs(b'roots(::%ld)', common)
1103 roots_common = repo.revs(b'roots(::%ld)', common)
1104 missing = repo.revs(b'not ::%ld', common)
1104 missing = repo.revs(b'not ::%ld', common)
1105 heads_missing = repo.revs(b'heads(%ld)', missing)
1105 heads_missing = repo.revs(b'heads(%ld)', missing)
1106 roots_missing = repo.revs(b'roots(%ld)', missing)
1106 roots_missing = repo.revs(b'roots(%ld)', missing)
1107 assert len(common) + len(missing) == len(all)
1107 assert len(common) + len(missing) == len(all)
1108
1108
1109 initial_undecided = repo.revs(
1109 initial_undecided = repo.revs(
1110 b'not (::%ln or %ln::)', heads_common_remote, heads_common_local
1110 b'not (::%ln or %ln::)', heads_common_remote, heads_common_local
1111 )
1111 )
1112 heads_initial_undecided = repo.revs(b'heads(%ld)', initial_undecided)
1112 heads_initial_undecided = repo.revs(b'heads(%ld)', initial_undecided)
1113 roots_initial_undecided = repo.revs(b'roots(%ld)', initial_undecided)
1113 roots_initial_undecided = repo.revs(b'roots(%ld)', initial_undecided)
1114 common_initial_undecided = initial_undecided & common
1114 common_initial_undecided = initial_undecided & common
1115 missing_initial_undecided = initial_undecided & missing
1115 missing_initial_undecided = initial_undecided & missing
1116
1116
1117 data[b'elapsed'] = t.elapsed
1117 data[b'elapsed'] = t.elapsed
1118 data[b'nb-common-heads'] = len(heads_common)
1118 data[b'nb-common-heads'] = len(heads_common)
1119 data[b'nb-common-heads-local'] = len(heads_common_local)
1119 data[b'nb-common-heads-local'] = len(heads_common_local)
1120 data[b'nb-common-heads-remote'] = len(heads_common_remote)
1120 data[b'nb-common-heads-remote'] = len(heads_common_remote)
1121 data[b'nb-common-heads-both'] = len(heads_common_both)
1121 data[b'nb-common-heads-both'] = len(heads_common_both)
1122 data[b'nb-common-roots'] = len(roots_common)
1122 data[b'nb-common-roots'] = len(roots_common)
1123 data[b'nb-head-local'] = len(heads_local)
1123 data[b'nb-head-local'] = len(heads_local)
1124 data[b'nb-head-local-missing'] = len(heads_local) - len(heads_common_local)
1124 data[b'nb-head-local-missing'] = len(heads_local) - len(heads_common_local)
1125 data[b'nb-head-remote'] = len(heads_remote)
1125 data[b'nb-head-remote'] = len(heads_remote)
1126 data[b'nb-head-remote-unknown'] = len(heads_remote) - len(
1126 data[b'nb-head-remote-unknown'] = len(heads_remote) - len(
1127 heads_common_remote
1127 heads_common_remote
1128 )
1128 )
1129 data[b'nb-revs'] = len(all)
1129 data[b'nb-revs'] = len(all)
1130 data[b'nb-revs-common'] = len(common)
1130 data[b'nb-revs-common'] = len(common)
1131 data[b'nb-revs-missing'] = len(missing)
1131 data[b'nb-revs-missing'] = len(missing)
1132 data[b'nb-missing-heads'] = len(heads_missing)
1132 data[b'nb-missing-heads'] = len(heads_missing)
1133 data[b'nb-missing-roots'] = len(roots_missing)
1133 data[b'nb-missing-roots'] = len(roots_missing)
1134 data[b'nb-ini_und'] = len(initial_undecided)
1134 data[b'nb-ini_und'] = len(initial_undecided)
1135 data[b'nb-ini_und-heads'] = len(heads_initial_undecided)
1135 data[b'nb-ini_und-heads'] = len(heads_initial_undecided)
1136 data[b'nb-ini_und-roots'] = len(roots_initial_undecided)
1136 data[b'nb-ini_und-roots'] = len(roots_initial_undecided)
1137 data[b'nb-ini_und-common'] = len(common_initial_undecided)
1137 data[b'nb-ini_und-common'] = len(common_initial_undecided)
1138 data[b'nb-ini_und-missing'] = len(missing_initial_undecided)
1138 data[b'nb-ini_und-missing'] = len(missing_initial_undecided)
1139
1139
1140 # display discovery summary
1140 # display discovery summary
1141 ui.writenoi18n(b"elapsed time: %(elapsed)f seconds\n" % data)
1141 ui.writenoi18n(b"elapsed time: %(elapsed)f seconds\n" % data)
1142 ui.writenoi18n(b"round-trips: %(total-roundtrips)9d\n" % data)
1142 ui.writenoi18n(b"round-trips: %(total-roundtrips)9d\n" % data)
1143 ui.writenoi18n(b"heads summary:\n")
1143 ui.writenoi18n(b"heads summary:\n")
1144 ui.writenoi18n(b" total common heads: %(nb-common-heads)9d\n" % data)
1144 ui.writenoi18n(b" total common heads: %(nb-common-heads)9d\n" % data)
1145 ui.writenoi18n(
1145 ui.writenoi18n(
1146 b" also local heads: %(nb-common-heads-local)9d\n" % data
1146 b" also local heads: %(nb-common-heads-local)9d\n" % data
1147 )
1147 )
1148 ui.writenoi18n(
1148 ui.writenoi18n(
1149 b" also remote heads: %(nb-common-heads-remote)9d\n" % data
1149 b" also remote heads: %(nb-common-heads-remote)9d\n" % data
1150 )
1150 )
1151 ui.writenoi18n(b" both: %(nb-common-heads-both)9d\n" % data)
1151 ui.writenoi18n(b" both: %(nb-common-heads-both)9d\n" % data)
1152 ui.writenoi18n(b" local heads: %(nb-head-local)9d\n" % data)
1152 ui.writenoi18n(b" local heads: %(nb-head-local)9d\n" % data)
1153 ui.writenoi18n(
1153 ui.writenoi18n(
1154 b" common: %(nb-common-heads-local)9d\n" % data
1154 b" common: %(nb-common-heads-local)9d\n" % data
1155 )
1155 )
1156 ui.writenoi18n(
1156 ui.writenoi18n(
1157 b" missing: %(nb-head-local-missing)9d\n" % data
1157 b" missing: %(nb-head-local-missing)9d\n" % data
1158 )
1158 )
1159 ui.writenoi18n(b" remote heads: %(nb-head-remote)9d\n" % data)
1159 ui.writenoi18n(b" remote heads: %(nb-head-remote)9d\n" % data)
1160 ui.writenoi18n(
1160 ui.writenoi18n(
1161 b" common: %(nb-common-heads-remote)9d\n" % data
1161 b" common: %(nb-common-heads-remote)9d\n" % data
1162 )
1162 )
1163 ui.writenoi18n(
1163 ui.writenoi18n(
1164 b" unknown: %(nb-head-remote-unknown)9d\n" % data
1164 b" unknown: %(nb-head-remote-unknown)9d\n" % data
1165 )
1165 )
1166 ui.writenoi18n(b"local changesets: %(nb-revs)9d\n" % data)
1166 ui.writenoi18n(b"local changesets: %(nb-revs)9d\n" % data)
1167 ui.writenoi18n(b" common: %(nb-revs-common)9d\n" % data)
1167 ui.writenoi18n(b" common: %(nb-revs-common)9d\n" % data)
1168 ui.writenoi18n(b" heads: %(nb-common-heads)9d\n" % data)
1168 ui.writenoi18n(b" heads: %(nb-common-heads)9d\n" % data)
1169 ui.writenoi18n(b" roots: %(nb-common-roots)9d\n" % data)
1169 ui.writenoi18n(b" roots: %(nb-common-roots)9d\n" % data)
1170 ui.writenoi18n(b" missing: %(nb-revs-missing)9d\n" % data)
1170 ui.writenoi18n(b" missing: %(nb-revs-missing)9d\n" % data)
1171 ui.writenoi18n(b" heads: %(nb-missing-heads)9d\n" % data)
1171 ui.writenoi18n(b" heads: %(nb-missing-heads)9d\n" % data)
1172 ui.writenoi18n(b" roots: %(nb-missing-roots)9d\n" % data)
1172 ui.writenoi18n(b" roots: %(nb-missing-roots)9d\n" % data)
1173 ui.writenoi18n(b" first undecided set: %(nb-ini_und)9d\n" % data)
1173 ui.writenoi18n(b" first undecided set: %(nb-ini_und)9d\n" % data)
1174 ui.writenoi18n(b" heads: %(nb-ini_und-heads)9d\n" % data)
1174 ui.writenoi18n(b" heads: %(nb-ini_und-heads)9d\n" % data)
1175 ui.writenoi18n(b" roots: %(nb-ini_und-roots)9d\n" % data)
1175 ui.writenoi18n(b" roots: %(nb-ini_und-roots)9d\n" % data)
1176 ui.writenoi18n(b" common: %(nb-ini_und-common)9d\n" % data)
1176 ui.writenoi18n(b" common: %(nb-ini_und-common)9d\n" % data)
1177 ui.writenoi18n(b" missing: %(nb-ini_und-missing)9d\n" % data)
1177 ui.writenoi18n(b" missing: %(nb-ini_und-missing)9d\n" % data)
1178
1178
1179 if ui.verbose:
1179 if ui.verbose:
1180 ui.writenoi18n(
1180 ui.writenoi18n(
1181 b"common heads: %s\n"
1181 b"common heads: %s\n"
1182 % b" ".join(sorted(short(n) for n in heads_common))
1182 % b" ".join(sorted(short(n) for n in heads_common))
1183 )
1183 )
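
# Illustrative invocation (path and seed are placeholders); --seed makes the
# sampling-based discovery repeatable, as wired up above:
#
#   $ hg debugdiscovery --seed 12345 ../other-copy
#
# The --local-as-revs/--remote-as-revs options let the same discovery run
# against two filtered views of a single repository, which is what the
# repoview filters registered above implement.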


_chunksize = 4 << 10


@command(
    b'debugdownload',
    [
        (b'o', b'output', b'', _(b'path')),
    ],
    optionalrepo=True,
)
def debugdownload(ui, repo, url, output=None, **opts):
    """download a resource using Mercurial logic and config"""
    fh = urlmod.open(ui, url, output)

    dest = ui
    if output:
        dest = open(output, b"wb", _chunksize)
    try:
        data = fh.read(_chunksize)
        while data:
            dest.write(data)
            data = fh.read(_chunksize)
    finally:
        if output:
            dest.close()
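
# Example (URL is a placeholder):
#
#   $ hg debugdownload -o /tmp/resource https://example.com/resource
#
# Without -o the payload is written to the ui; either way it is streamed in
# _chunksize (4 KiB) reads rather than buffered whole.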


@command(b'debugextensions', cmdutil.formatteropts, [], optionalrepo=True)
def debugextensions(ui, repo, **opts):
    '''show information about active extensions'''
    opts = pycompat.byteskwargs(opts)
    exts = extensions.extensions(ui)
    hgver = util.version()
    fm = ui.formatter(b'debugextensions', opts)
    for extname, extmod in sorted(exts, key=operator.itemgetter(0)):
        isinternal = extensions.ismoduleinternal(extmod)
        extsource = None

        if util.safehasattr(extmod, '__file__'):
            extsource = pycompat.fsencode(extmod.__file__)
        elif getattr(sys, 'oxidized', False):
            extsource = pycompat.sysexecutable
        if isinternal:
            exttestedwith = []  # never expose magic string to users
        else:
            exttestedwith = getattr(extmod, 'testedwith', b'').split()
        extbuglink = getattr(extmod, 'buglink', None)

        fm.startitem()

        if ui.quiet or ui.verbose:
            fm.write(b'name', b'%s\n', extname)
        else:
            fm.write(b'name', b'%s', extname)
            if isinternal or hgver in exttestedwith:
                fm.plain(b'\n')
            elif not exttestedwith:
                fm.plain(_(b' (untested!)\n'))
            else:
                lasttestedversion = exttestedwith[-1]
                fm.plain(b' (%s!)\n' % lasttestedversion)

        fm.condwrite(
            ui.verbose and extsource,
            b'source',
            _(b'  location: %s\n'),
            extsource or b"",
        )

        if ui.verbose:
            fm.plain(_(b'  bundled: %s\n') % [b'no', b'yes'][isinternal])
        fm.data(bundled=isinternal)

        fm.condwrite(
            ui.verbose and exttestedwith,
            b'testedwith',
            _(b'  tested with: %s\n'),
            fm.formatlist(exttestedwith, name=b'ver'),
        )

        fm.condwrite(
            ui.verbose and extbuglink,
            b'buglink',
            _(b'  bug reporting: %s\n'),
            extbuglink or b"",
        )

    fm.end()
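
# Example (the extension list and paths depend on local configuration):
#
#   $ hg debugextensions -v
#   rebase
#     location: .../hgext/rebase.py
#     bundled: yes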


@command(
    b'debugfileset',
    [
        (
            b'r',
            b'rev',
            b'',
            _(b'apply the filespec on this revision'),
            _(b'REV'),
        ),
        (
            b'',
            b'all-files',
            False,
            _(b'test files from all revisions and working directory'),
        ),
        (
            b's',
            b'show-matcher',
            None,
            _(b'print internal representation of matcher'),
        ),
        (
            b'p',
            b'show-stage',
            [],
            _(b'print parsed tree at the given stage'),
            _(b'NAME'),
        ),
    ],
    _(b'[-r REV] [--all-files] [OPTION]... FILESPEC'),
)
def debugfileset(ui, repo, expr, **opts):
    '''parse and apply a fileset specification'''
    from . import fileset

    fileset.symbols  # force import of fileset so we have predicates to optimize
    opts = pycompat.byteskwargs(opts)
    ctx = scmutil.revsingle(repo, opts.get(b'rev'), None)

    stages = [
        (b'parsed', pycompat.identity),
        (b'analyzed', filesetlang.analyze),
        (b'optimized', filesetlang.optimize),
    ]
    stagenames = {n for n, f in stages}

    showalways = set()
    if ui.verbose and not opts[b'show_stage']:
        # show parsed tree by --verbose (deprecated)
        showalways.add(b'parsed')
    if opts[b'show_stage'] == [b'all']:
        showalways.update(stagenames)
    else:
        for n in opts[b'show_stage']:
            if n not in stagenames:
                raise error.Abort(_(b'invalid stage name: %s') % n)
        showalways.update(opts[b'show_stage'])

    tree = filesetlang.parse(expr)
    for n, f in stages:
        tree = f(tree)
        if n in showalways:
            if opts[b'show_stage'] or n != b'parsed':
                ui.write(b"* %s:\n" % n)
            ui.write(filesetlang.prettyformat(tree), b"\n")

    files = set()
    if opts[b'all_files']:
        for r in repo:
            c = repo[r]
            files.update(c.files())
            files.update(c.substate)
    if opts[b'all_files'] or ctx.rev() is None:
        wctx = repo[None]
        files.update(
            repo.dirstate.walk(
                scmutil.matchall(repo),
                subrepos=list(wctx.substate),
                unknown=True,
                ignored=True,
            )
        )
        files.update(wctx.substate)
    else:
        files.update(ctx.files())
        files.update(ctx.substate)

    m = ctx.matchfileset(repo.getcwd(), expr)
    if opts[b'show_matcher'] or (opts[b'show_matcher'] is None and ui.verbose):
        ui.writenoi18n(b'* matcher:\n', stringutil.prettyrepr(m), b'\n')
    for f in sorted(files):
        if not m(f):
            continue
        ui.write(b"%s\n" % f)
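
# Example (pattern is illustrative); -p all prints every parse stage and -s
# dumps the matcher before the matching files are listed:
#
#   $ hg debugfileset -p all -s '**.py'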


@command(b'debugformat', [] + cmdutil.formatteropts)
def debugformat(ui, repo, **opts):
    """display format information about the current repository

    Use --verbose to get extra information about the current config value
    and the Mercurial default."""
    opts = pycompat.byteskwargs(opts)
    maxvariantlength = max(len(fv.name) for fv in upgrade.allformatvariant)
    maxvariantlength = max(len(b'format-variant'), maxvariantlength)

    def makeformatname(name):
        return b'%s:' + (b' ' * (maxvariantlength - len(name)))

    fm = ui.formatter(b'debugformat', opts)
    if fm.isplain():

        def formatvalue(value):
            if util.safehasattr(value, b'startswith'):
                return value
            if value:
                return b'yes'
            else:
                return b'no'

    else:
        formatvalue = pycompat.identity

    fm.plain(b'format-variant')
    fm.plain(b' ' * (maxvariantlength - len(b'format-variant')))
    fm.plain(b' repo')
    if ui.verbose:
        fm.plain(b' config default')
    fm.plain(b'\n')
    for fv in upgrade.allformatvariant:
        fm.startitem()
        repovalue = fv.fromrepo(repo)
        configvalue = fv.fromconfig(repo)

        if repovalue != configvalue:
            namelabel = b'formatvariant.name.mismatchconfig'
            repolabel = b'formatvariant.repo.mismatchconfig'
        elif repovalue != fv.default:
            namelabel = b'formatvariant.name.mismatchdefault'
            repolabel = b'formatvariant.repo.mismatchdefault'
        else:
            namelabel = b'formatvariant.name.uptodate'
            repolabel = b'formatvariant.repo.uptodate'

        fm.write(b'name', makeformatname(fv.name), fv.name, label=namelabel)
        fm.write(b'repo', b' %3s', formatvalue(repovalue), label=repolabel)
        if fv.default != configvalue:
            configlabel = b'formatvariant.config.special'
        else:
            configlabel = b'formatvariant.config.default'
        fm.condwrite(
            ui.verbose,
            b'config',
            b' %6s',
            formatvalue(configvalue),
            label=configlabel,
        )
        fm.condwrite(
            ui.verbose,
            b'default',
            b' %7s',
            formatvalue(fv.default),
            label=b'formatvariant.default',
        )
        fm.plain(b'\n')
    fm.end()
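
# Example of plain output (variant names and values differ per repository and
# release); --verbose adds the config and default columns written above:
#
#   $ hg debugformat
#   format-variant     repo
#   fncache:            yes
#   dotencode:          yes
#   ...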


@command(b'debugfsinfo', [], _(b'[PATH]'), norepo=True)
def debugfsinfo(ui, path=b"."):
    """show information detected about current filesystem"""
    ui.writenoi18n(b'path: %s\n' % path)
    ui.writenoi18n(
        b'mounted on: %s\n' % (util.getfsmountpoint(path) or b'(unknown)')
    )
    ui.writenoi18n(b'exec: %s\n' % (util.checkexec(path) and b'yes' or b'no'))
    ui.writenoi18n(b'fstype: %s\n' % (util.getfstype(path) or b'(unknown)'))
    ui.writenoi18n(
        b'symlink: %s\n' % (util.checklink(path) and b'yes' or b'no')
    )
    ui.writenoi18n(
        b'hardlink: %s\n' % (util.checknlink(path) and b'yes' or b'no')
    )
    casesensitive = b'(unknown)'
    try:
        with pycompat.namedtempfile(prefix=b'.debugfsinfo', dir=path) as f:
            casesensitive = util.fscasesensitive(f.name) and b'yes' or b'no'
    except OSError:
        pass
    ui.writenoi18n(b'case-sensitive: %s\n' % casesensitive)
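
# Example (values depend on the filesystem backing PATH):
#
#   $ hg debugfsinfo /tmp
#   path: /tmp
#   mounted on: /
#   exec: yes
#   fstype: ext4
#   symlink: yes
#   hardlink: yes
#   case-sensitive: yes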


@command(
    b'debuggetbundle',
    [
        (b'H', b'head', [], _(b'id of head node'), _(b'ID')),
        (b'C', b'common', [], _(b'id of common node'), _(b'ID')),
        (
            b't',
            b'type',
            b'bzip2',
            _(b'bundle compression type to use'),
            _(b'TYPE'),
        ),
    ],
    _(b'REPO FILE [-H|-C ID]...'),
    norepo=True,
)
def debuggetbundle(ui, repopath, bundlepath, head=None, common=None, **opts):
    """retrieves a bundle from a repo

    Every ID must be a full-length hex node id string. Saves the bundle to the
    given file.
    """
    opts = pycompat.byteskwargs(opts)
    repo = hg.peer(ui, opts, repopath)
    if not repo.capable(b'getbundle'):
        raise error.Abort(b"getbundle() not supported by target repository")
    args = {}
    if common:
        args['common'] = [bin(s) for s in common]
    if head:
        args['heads'] = [bin(s) for s in head]
    # TODO: get desired bundlecaps from command line.
    args['bundlecaps'] = None
    bundle = repo.getbundle(b'debug', **args)

    bundletype = opts.get(b'type', b'bzip2').lower()
    btypes = {
        b'none': b'HG10UN',
        b'bzip2': b'HG10BZ',
        b'gzip': b'HG10GZ',
        b'bundle2': b'HG20',
    }
    bundletype = btypes.get(bundletype)
    if bundletype not in bundle2.bundletypes:
        raise error.Abort(_(b'unknown bundle type specified with --type'))
    bundle2.writebundle(ui, bundle, bundlepath, bundletype)
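
# Example (repository path is a placeholder); -t picks one of the btypes
# keys above, e.g. 'bundle2' maps to the HG20 format:
#
#   $ hg debuggetbundle ssh://example.com/repo bundle.hg -t bundle2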


@command(b'debugignore', [], b'[FILE]')
def debugignore(ui, repo, *files, **opts):
    """display the combined ignore pattern and information about ignored files

    With no argument display the combined ignore pattern.

    Given space separated file names, shows if the given file is ignored and
    if so, shows the ignore rule (file and line number) that matched it.
    """
    ignore = repo.dirstate._ignore
    if not files:
        # Show all the patterns
        ui.write(b"%s\n" % pycompat.byterepr(ignore))
    else:
        m = scmutil.match(repo[None], pats=files)
        uipathfn = scmutil.getuipathfn(repo, legacyrelativevalue=True)
        for f in m.files():
            nf = util.normpath(f)
            ignored = None
            ignoredata = None
            if nf != b'.':
                if ignore(nf):
                    ignored = nf
                    ignoredata = repo.dirstate._ignorefileandline(nf)
                else:
                    for p in pathutil.finddirs(nf):
                        if ignore(p):
                            ignored = p
                            ignoredata = repo.dirstate._ignorefileandline(p)
                            break
            if ignored:
                if ignored == nf:
                    ui.write(_(b"%s is ignored\n") % uipathfn(f))
                else:
                    ui.write(
                        _(
                            b"%s is ignored because of "
                            b"containing directory %s\n"
                        )
                        % (uipathfn(f), ignored)
                    )
                ignorefile, lineno, line = ignoredata
                ui.write(
                    _(b"(ignore rule in %s, line %d: '%s')\n")
                    % (ignorefile, lineno, line)
                )
            else:
                ui.write(_(b"%s is not ignored\n") % uipathfn(f))
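
# Example (file name and rule location are illustrative):
#
#   $ hg debugignore build/output.o
#   build/output.o is ignored
#   (ignore rule in .hgignore, line 3: 'build/')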


@command(
    b'debugindex',
    cmdutil.debugrevlogopts + cmdutil.formatteropts,
    _(b'-c|-m|FILE'),
)
def debugindex(ui, repo, file_=None, **opts):
    """dump index data for a storage primitive"""
    opts = pycompat.byteskwargs(opts)
    store = cmdutil.openstorage(repo, b'debugindex', file_, opts)

    if ui.debugflag:
        shortfn = hex
    else:
        shortfn = short

    idlen = 12
    for i in store:
        idlen = len(shortfn(store.node(i)))
        break

    fm = ui.formatter(b'debugindex', opts)
    fm.plain(
        b'   rev linkrev %s %s p2\n'
        % (b'nodeid'.ljust(idlen), b'p1'.ljust(idlen))
    )

    for rev in store:
        node = store.node(rev)
        parents = store.parents(node)

        fm.startitem()
        fm.write(b'rev', b'%6d ', rev)
        fm.write(b'linkrev', b'%7d ', store.linkrev(rev))
        fm.write(b'node', b'%s ', shortfn(node))
        fm.write(b'p1', b'%s ', shortfn(parents[0]))
        fm.write(b'p2', b'%s', shortfn(parents[1]))
        fm.plain(b'\n')

    fm.end()
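
# Example for the manifest revlog (node ids are placeholders):
#
#   $ hg debugindex -m
#      rev linkrev nodeid       p1           p2
#        0       0 <12-hex>     000000000000 000000000000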


@command(
    b'debugindexdot',
    cmdutil.debugrevlogopts,
    _(b'-c|-m|FILE'),
    optionalrepo=True,
)
def debugindexdot(ui, repo, file_=None, **opts):
    """dump an index DAG as a graphviz dot file"""
    opts = pycompat.byteskwargs(opts)
    r = cmdutil.openstorage(repo, b'debugindexdot', file_, opts)
    ui.writenoi18n(b"digraph G {\n")
    for i in r:
        node = r.node(i)
        pp = r.parents(node)
        ui.write(b"\t%d -> %d\n" % (r.rev(pp[0]), i))
        if pp[1] != nullid:
            ui.write(b"\t%d -> %d\n" % (r.rev(pp[1]), i))
    ui.write(b"}\n")
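
# A minimal sketch of rendering the emitted DAG (assumes the graphviz 'dot'
# tool is installed; file names are illustrative):
#
#   $ hg debugindexdot -c > changelog.dot
#   $ dot -Tpng changelog.dot -o changelog.png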


@command(b'debugindexstats', [])
def debugindexstats(ui, repo):
    """show stats related to the changelog index"""
    repo.changelog.shortest(nullid, 1)
    index = repo.changelog.index
    if not util.safehasattr(index, b'stats'):
        raise error.Abort(_(b'debugindexstats only works with native code'))
    for k, v in sorted(index.stats().items()):
        ui.write(b'%s: %d\n' % (k, v))
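
# Example (the available stat names come from the native index implementation
# and may vary between builds, so placeholders are shown here):
#
#   $ hg debugindexstats
#   <stat-name>: <value>
#   ...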


@command(b'debuginstall', [] + cmdutil.formatteropts, b'', norepo=True)
def debuginstall(ui, **opts):
    """test Mercurial installation

    Returns 0 on success.
    """
    opts = pycompat.byteskwargs(opts)

    problems = 0

    fm = ui.formatter(b'debuginstall', opts)
    fm.startitem()

    # encoding might be unknown or wrong. don't translate these messages.
    fm.write(b'encoding', b"checking encoding (%s)...\n", encoding.encoding)
    err = None
    try:
        codecs.lookup(pycompat.sysstr(encoding.encoding))
    except LookupError as inst:
        err = stringutil.forcebytestr(inst)
        problems += 1
    fm.condwrite(
        err,
        b'encodingerror',
        b" %s\n (check that your locale is properly set)\n",
        err,
    )

    # Python
    pythonlib = None
    if util.safehasattr(os, '__file__'):
        pythonlib = os.path.dirname(pycompat.fsencode(os.__file__))
    elif getattr(sys, 'oxidized', False):
        pythonlib = pycompat.sysexecutable

    fm.write(
        b'pythonexe',
        _(b"checking Python executable (%s)\n"),
        pycompat.sysexecutable or _(b"unknown"),
    )
    fm.write(
        b'pythonimplementation',
        _(b"checking Python implementation (%s)\n"),
        pycompat.sysbytes(platform.python_implementation()),
    )
    fm.write(
        b'pythonver',
        _(b"checking Python version (%s)\n"),
        (b"%d.%d.%d" % sys.version_info[:3]),
    )
    fm.write(
        b'pythonlib',
        _(b"checking Python lib (%s)...\n"),
        pythonlib or _(b"unknown"),
    )

    try:
        from . import rustext

        rustext.__doc__  # trigger lazy import
    except ImportError:
        rustext = None

    security = set(sslutil.supportedprotocols)
    if sslutil.hassni:
        security.add(b'sni')

    fm.write(
        b'pythonsecurity',
        _(b"checking Python security support (%s)\n"),
        fm.formatlist(sorted(security), name=b'protocol', fmt=b'%s', sep=b','),
    )

    # These are warnings, not errors. So don't increment problem count. This
    # may change in the future.
    if b'tls1.2' not in security:
        fm.plain(
            _(
                b'  TLS 1.2 not supported by Python install; '
                b'network connections lack modern security\n'
            )
        )
    if b'sni' not in security:
        fm.plain(
            _(
                b'  SNI not supported by Python install; may have '
                b'connectivity issues with some servers\n'
            )
        )

    fm.plain(
        _(
            b"checking Rust extensions (%s)\n"
            % (b'missing' if rustext is None else b'installed')
        ),
    )

    # TODO print CA cert info

    # hg version
    hgver = util.version()
    fm.write(
        b'hgver', _(b"checking Mercurial version (%s)\n"), hgver.split(b'+')[0]
    )
    fm.write(
        b'hgverextra',
        _(b"checking Mercurial custom build (%s)\n"),
        b'+'.join(hgver.split(b'+')[1:]),
    )

    # compiled modules
    hgmodules = None
    if util.safehasattr(sys.modules[__name__], '__file__'):
        hgmodules = os.path.dirname(pycompat.fsencode(__file__))
    elif getattr(sys, 'oxidized', False):
        hgmodules = pycompat.sysexecutable

    fm.write(
        b'hgmodulepolicy', _(b"checking module policy (%s)\n"), policy.policy
    )
    fm.write(
        b'hgmodules',
        _(b"checking installed modules (%s)...\n"),
        hgmodules or _(b"unknown"),
    )

    rustandc = policy.policy in (b'rust+c', b'rust+c-allow')
    rustext = rustandc  # for now, that's the only case
    cext = policy.policy in (b'c', b'allow') or rustandc
    nopure = cext or rustext
    if nopure:
        err = None
        try:
            if cext:
                from .cext import (  # pytype: disable=import-error
                    base85,
                    bdiff,
                    mpatch,
                    osutil,
                )

                # quiet pyflakes
                dir(bdiff), dir(mpatch), dir(base85), dir(osutil)
            if rustext:
                from .rustext import (  # pytype: disable=import-error
                    ancestor,
                    dirstate,
                )

                dir(ancestor), dir(dirstate)  # quiet pyflakes
        except Exception as inst:
            err = stringutil.forcebytestr(inst)
            problems += 1
        fm.condwrite(err, b'extensionserror', b" %s\n", err)

    compengines = util.compengines._engines.values()
    fm.write(
        b'compengines',
        _(b'checking registered compression engines (%s)\n'),
        fm.formatlist(
            sorted(e.name() for e in compengines),
            name=b'compengine',
            fmt=b'%s',
            sep=b', ',
        ),
    )
    fm.write(
        b'compenginesavail',
        _(b'checking available compression engines (%s)\n'),
        fm.formatlist(
            sorted(e.name() for e in compengines if e.available()),
            name=b'compengine',
            fmt=b'%s',
            sep=b', ',
        ),
    )
    wirecompengines = compression.compengines.supportedwireengines(
        compression.SERVERROLE
    )
    fm.write(
        b'compenginesserver',
        _(
            b'checking available compression engines '
            b'for wire protocol (%s)\n'
        ),
        fm.formatlist(
            [e.name() for e in wirecompengines if e.wireprotosupport()],
            name=b'compengine',
            fmt=b'%s',
            sep=b', ',
        ),
    )
    re2 = b'missing'
    if util._re2:
        re2 = b'available'
    fm.plain(_(b'checking "re2" regexp engine (%s)\n') % re2)
    fm.data(re2=bool(util._re2))

    # templates
    p = templater.templatedir()
    fm.write(b'templatedirs', b'checking templates (%s)...\n', p or b'')
    fm.condwrite(not p, b'', _(b" no template directories found\n"))
    if p:
        (m, fp) = templater.try_open_template(b"map-cmdline.default")
        if m:
            # template found, check if it is working
            err = None
            try:
                templater.templater.frommapfile(m)
            except Exception as inst:
                err = stringutil.forcebytestr(inst)
                p = None
            fm.condwrite(err, b'defaulttemplateerror', b" %s\n", err)
        else:
            p = None
        fm.condwrite(
            p, b'defaulttemplate', _(b"checking default template (%s)\n"), m
        )
        fm.condwrite(
            not m,
            b'defaulttemplatenotfound',
            _(b" template '%s' not found\n"),
            b"default",
        )
    if not p:
        problems += 1
    fm.condwrite(
        not p, b'', _(b" (templates seem to have been installed incorrectly)\n")
    )

    # editor
    editor = ui.geteditor()
    editor = util.expandpath(editor)
    editorbin = procutil.shellsplit(editor)[0]
    fm.write(b'editor', _(b"checking commit editor... (%s)\n"), editorbin)
    cmdpath = procutil.findexe(editorbin)
    fm.condwrite(
        not cmdpath and editor == b'vi',
        b'vinotfound',
        _(
            b" No commit editor set and can't find %s in PATH\n"
            b" (specify a commit editor in your configuration"
            b" file)\n"
        ),
        not cmdpath and editor == b'vi' and editorbin,
    )
    fm.condwrite(
        not cmdpath and editor != b'vi',
        b'editornotfound',
        _(
            b" Can't find editor '%s' in PATH\n"
            b" (specify a commit editor in your configuration"
            b" file)\n"
        ),
        not cmdpath and editorbin,
    )
    if not cmdpath and editor != b'vi':
        problems += 1

    # check username
    username = None
    err = None
    try:
        username = ui.username()
    except error.Abort as e:
        err = e.message
        problems += 1

    fm.condwrite(
        username, b'username', _(b"checking username (%s)\n"), username
    )
    fm.condwrite(
        err,
        b'usernameerror',
        _(
            b"checking username...\n %s\n"
            b" (specify a username in your configuration file)\n"
        ),
        err,
    )

    for name, mod in extensions.extensions():
        handler = getattr(mod, 'debuginstall', None)
        if handler is not None:
            problems += handler(ui, fm)

    fm.condwrite(not problems, b'', _(b"no problems detected\n"))
    if not problems:
        fm.data(problems=problems)
    fm.condwrite(
        problems,
        b'problems',
        _(b"%d problems detected, please check your install!\n"),
        problems,
    )
    fm.end()

    return problems
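
# Example (output abridged; the checks reflect the local install):
#
#   $ hg debuginstall
#   checking encoding (UTF-8)...
#   ...
#   no problems detected
#
# The problem count doubles as the return value, so a clean install exits 0.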


@command(b'debugknown', [], _(b'REPO ID...'), norepo=True)
def debugknown(ui, repopath, *ids, **opts):
    """test whether node ids are known to a repo

    Every ID must be a full-length hex node id string. Returns a list of 0s
    and 1s indicating unknown/known.
    """
    opts = pycompat.byteskwargs(opts)
    repo = hg.peer(ui, opts, repopath)
    if not repo.capable(b'known'):
        raise error.Abort(b"known() not supported by target repository")
    flags = repo.known([bin(s) for s in ids])
    ui.write(b"%s\n" % (b"".join([f and b"1" or b"0" for f in flags])))
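
# Example (node ids are placeholders); a '1' means the peer knows the node:
#
#   $ hg debugknown ssh://example.com/repo <40-hex-node> <40-hex-node>
#   10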


@command(b'debuglabelcomplete', [], _(b'LABEL...'))
def debuglabelcomplete(ui, repo, *args):
    '''backwards compatibility with old bash completion scripts (DEPRECATED)'''
    debugnamecomplete(ui, repo, *args)


@command(
    b'debuglocks',
    [
        (b'L', b'force-free-lock', None, _(b'free the store lock (DANGEROUS)')),
        (
            b'W',
            b'force-free-wlock',
            None,
            _(b'free the working state lock (DANGEROUS)'),
        ),
        (b's', b'set-lock', None, _(b'set the store lock until stopped')),
        (
            b'S',
            b'set-wlock',
            None,
            _(b'set the working state lock until stopped'),
        ),
    ],
    _(b'[OPTION]...'),
)
def debuglocks(ui, repo, **opts):
    """show or modify state of locks

    By default, this command will show which locks are held. This
    includes the user and process holding the lock, the amount of time
    the lock has been held, and the machine name where the process is
    running if it's not local.

    Locks protect the integrity of Mercurial's data, so they should be
    treated with care. System crashes or other interruptions may cause
    locks to not be properly released, though Mercurial will usually
    detect and remove such stale locks automatically.

    However, detecting stale locks may not always be possible (for
    instance, on a shared filesystem). Removing locks may also be
    blocked by filesystem permissions.

    Setting a lock will prevent other commands from changing the data.
    The command will wait until an interruption (SIGINT, SIGTERM, ...) occurs.
    The set locks are removed when the command exits.

    Returns 0 if no locks are held.

    """

    if opts.get('force_free_lock'):
        repo.svfs.unlink(b'lock')
    if opts.get('force_free_wlock'):
        repo.vfs.unlink(b'wlock')
    if opts.get('force_free_lock') or opts.get('force_free_wlock'):
        return 0

    locks = []
    try:
        if opts.get('set_wlock'):
            try:
                locks.append(repo.wlock(False))
            except error.LockHeld:
                raise error.Abort(_(b'wlock is already held'))
        if opts.get('set_lock'):
            try:
                locks.append(repo.lock(False))
            except error.LockHeld:
                raise error.Abort(_(b'lock is already held'))
        if len(locks):
            ui.promptchoice(_(b"ready to release the lock (y)? $$ &Yes"))
            return 0
    finally:
        release(*locks)

    now = time.time()
    held = 0

    def report(vfs, name, method):
        # this causes stale locks to get reaped for more accurate reporting
        try:
            l = method(False)
        except error.LockHeld:
            l = None

        if l:
            l.release()
        else:
            try:
                st = vfs.lstat(name)
                age = now - st[stat.ST_MTIME]
                user = util.username(st.st_uid)
                locker = vfs.readlock(name)
                if b":" in locker:
                    host, pid = locker.split(b':')
                    if host == socket.gethostname():
                        locker = b'user %s, process %s' % (user or b'None', pid)
                    else:
                        locker = b'user %s, process %s, host %s' % (
                            user or b'None',
                            pid,
                            host,
                        )
                ui.writenoi18n(b"%-6s %s (%ds)\n" % (name + b":", locker, age))
                return 1
            except OSError as e:
                if e.errno != errno.ENOENT:
                    raise

        ui.writenoi18n(b"%-6s free\n" % (name + b":"))
        return 0

    held += report(repo.svfs, b"lock", repo.lock)
    held += report(repo.vfs, b"wlock", repo.wlock)

    return held
2071
2071
2072
2072
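# A hypothetical `hg debuglocks` session (the user, PID, and host below are
# made up; the layout matches the report() helper above):
#
#   $ hg debuglocks
#   lock:  user alice, process 12345, host buildbox (3600s)
#   wlock: free
#   $ hg debuglocks --force-free-lock
#   $ hg debuglocks
#   lock:  free
#   wlock: free
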
@command(
    b'debugmanifestfulltextcache',
    [
        (b'', b'clear', False, _(b'clear the cache')),
        (
            b'a',
            b'add',
            [],
            _(b'add the given manifest nodes to the cache'),
            _(b'NODE'),
        ),
    ],
    b'',
)
def debugmanifestfulltextcache(ui, repo, add=(), **opts):
    """show, clear or amend the contents of the manifest fulltext cache"""

    def getcache():
        r = repo.manifestlog.getstorage(b'')
        try:
            return r._fulltextcache
        except AttributeError:
            msg = _(
                b"Current revlog implementation doesn't appear to have a "
                b"manifest fulltext cache\n"
            )
            raise error.Abort(msg)

    if opts.get('clear'):
        with repo.wlock():
            cache = getcache()
            cache.clear(clear_persisted_data=True)
            return

    if add:
        with repo.wlock():
            m = repo.manifestlog
            store = m.getstorage(b'')
            for n in add:
                try:
                    manifest = m[store.lookup(n)]
                except error.LookupError as e:
                    raise error.Abort(e, hint=b"Check your manifest node id")
                manifest.read()  # stores the revision in the cache too
            return

    cache = getcache()
    if not len(cache):
        ui.write(_(b'cache empty\n'))
    else:
        ui.write(
            _(
                b'cache contains %d manifest entries, in order of most to '
                b'least recent:\n'
            )
            % (len(cache),)
        )
        totalsize = 0
        for nodeid in cache:
            # Use cache.peek to not update the LRU order
            data = cache.peek(nodeid)
            size = len(data)
            totalsize += size + 24  # 20 bytes nodeid, 4 bytes size
            ui.write(
                _(b'id: %s, size %s\n') % (hex(nodeid), util.bytecount(size))
            )
        ondisk = cache._opener.stat(b'manifestfulltextcache').st_size
        ui.write(
            _(b'total cache data size %s, on-disk %s\n')
            % (util.bytecount(totalsize), util.bytecount(ondisk))
        )


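# Hypothetical listing-mode output (node IDs abridged). The 24-byte per-entry
# overhead counted above is the 20-byte node ID plus a 4-byte size field:
#
#   $ hg debugmanifestfulltextcache
#   cache contains 2 manifest entries, in order of most to least recent:
#   id: 1ab2c3..., size 12.1 KB
#   id: 4de5f6..., size 11.9 KB
#   total cache data size 24.1 KB, on-disk 24.1 KB
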
@command(b'debugmergestate', [] + cmdutil.templateopts, b'')
def debugmergestate(ui, repo, *args, **opts):
    """print merge state

    Use --verbose to print out information about whether v1 or v2 merge state
    was chosen."""

    if ui.verbose:
        ms = mergestatemod.mergestate(repo)

        # sort so that reasonable information is on top
        v1records = ms._readrecordsv1()
        v2records = ms._readrecordsv2()

        if not v1records and not v2records:
            pass
        elif not v2records:
            ui.writenoi18n(b'no version 2 merge state\n')
        elif ms._v1v2match(v1records, v2records):
            ui.writenoi18n(b'v1 and v2 states match: using v2\n')
        else:
            ui.writenoi18n(b'v1 and v2 states mismatch: using v1\n')

    opts = pycompat.byteskwargs(opts)
    if not opts[b'template']:
        opts[b'template'] = (
            b'{if(commits, "", "no merge state found\n")}'
            b'{commits % "{name}{if(label, " ({label})")}: {node}\n"}'
            b'{files % "file: {path} (state \\"{state}\\")\n'
            b'{if(local_path, "'
            b' local path: {local_path} (hash {local_key}, flags \\"{local_flags}\\")\n'
            b' ancestor path: {ancestor_path} (node {ancestor_node})\n'
            b' other path: {other_path} (node {other_node})\n'
            b'")}'
            b'{if(rename_side, "'
            b' rename side: {rename_side}\n'
            b' renamed path: {renamed_path}\n'
            b'")}'
            b'{extras % " extra: {key} = {value}\n"}'
            b'"}'
            b'{extras % "extra: {file} ({key} = {value})\n"}'
        )

    ms = mergestatemod.mergestate.read(repo)

    fm = ui.formatter(b'debugmergestate', opts)
    fm.startitem()

    fm_commits = fm.nested(b'commits')
    if ms.active():
        for name, node, label_index in (
            (b'local', ms.local, 0),
            (b'other', ms.other, 1),
        ):
            fm_commits.startitem()
            fm_commits.data(name=name)
            fm_commits.data(node=hex(node))
            if ms._labels and len(ms._labels) > label_index:
                fm_commits.data(label=ms._labels[label_index])
    fm_commits.end()

    fm_files = fm.nested(b'files')
    if ms.active():
        for f in ms:
            fm_files.startitem()
            fm_files.data(path=f)
            state = ms._state[f]
            fm_files.data(state=state[0])
            if state[0] in (
                mergestatemod.MERGE_RECORD_UNRESOLVED,
                mergestatemod.MERGE_RECORD_RESOLVED,
            ):
                fm_files.data(local_key=state[1])
                fm_files.data(local_path=state[2])
                fm_files.data(ancestor_path=state[3])
                fm_files.data(ancestor_node=state[4])
                fm_files.data(other_path=state[5])
                fm_files.data(other_node=state[6])
                fm_files.data(local_flags=state[7])
            elif state[0] in (
                mergestatemod.MERGE_RECORD_UNRESOLVED_PATH,
                mergestatemod.MERGE_RECORD_RESOLVED_PATH,
            ):
                fm_files.data(renamed_path=state[1])
                fm_files.data(rename_side=state[2])
            fm_extras = fm_files.nested(b'extras')
            for k, v in sorted(ms.extras(f).items()):
                fm_extras.startitem()
                fm_extras.data(key=k)
                fm_extras.data(value=v)
            fm_extras.end()

    fm_files.end()

    fm_extras = fm.nested(b'extras')
    for f, d in sorted(pycompat.iteritems(ms.allextras())):
        if f in ms:
            # If the file is in the mergestate, we have already processed
            # its extras
            continue
        for k, v in pycompat.iteritems(d):
            fm_extras.startitem()
            fm_extras.data(file=f)
            fm_extras.data(key=k)
            fm_extras.data(value=v)
    fm_extras.end()

    fm.end()


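# With the default template above, a repository with one unresolved file
# might print something like this (node IDs and hashes are made up):
#
#   $ hg debugmergestate
#   local (working copy): 252023def37b...
#   other (merge rev): 6c1a9a6bd1d6...
#   file: foo.txt (state "u")
#    local path: foo.txt (hash 60b27f004e45..., flags "")
#    ancestor path: foo.txt (node 9f3e7a1b2c4d...)
#    other path: foo.txt (node 1c2d3e4f5a6b...)
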
@command(b'debugnamecomplete', [], _(b'NAME...'))
def debugnamecomplete(ui, repo, *args):
    '''complete "names" - tags, open branch names, bookmark names'''

    names = set()
    # since we previously only listed open branches, we will handle that
    # specially (after this for loop)
    for name, ns in pycompat.iteritems(repo.names):
        if name != b'branches':
            names.update(ns.listnames(repo))
    names.update(
        tag
        for (tag, heads, tip, closed) in repo.branchmap().iterbranches()
        if not closed
    )
    completions = set()
    if not args:
        args = [b'']
    for a in args:
        completions.update(n for n in names if n.startswith(a))
    ui.write(b'\n'.join(sorted(completions)))
    ui.write(b'\n')


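# Hypothetical use from a shell completion script (the names are made up):
#
#   $ hg debugnamecomplete re
#   release-1.0
#   review
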
@command(
    b'debugnodemap',
    [
        (
            b'',
            b'dump-new',
            False,
            _(b'write a (new) persistent binary nodemap on stdout'),
        ),
        (b'', b'dump-disk', False, _(b'dump on-disk data on stdout')),
        (
            b'',
            b'check',
            False,
            _(b'check that the data on disk are correct.'),
        ),
        (
            b'',
            b'metadata',
            False,
            _(b'display the on-disk metadata for the nodemap'),
        ),
    ],
)
def debugnodemap(ui, repo, **opts):
    """write and inspect the on-disk nodemap"""
    if opts['dump_new']:
        unfi = repo.unfiltered()
        cl = unfi.changelog
        if util.safehasattr(cl.index, "nodemap_data_all"):
            data = cl.index.nodemap_data_all()
        else:
            data = nodemap.persistent_data(cl.index)
        ui.write(data)
    elif opts['dump_disk']:
        unfi = repo.unfiltered()
        cl = unfi.changelog
        nm_data = nodemap.persisted_data(cl)
        if nm_data is not None:
            docket, data = nm_data
            ui.write(data[:])
    elif opts['check']:
        unfi = repo.unfiltered()
        cl = unfi.changelog
        nm_data = nodemap.persisted_data(cl)
        if nm_data is not None:
            docket, data = nm_data
            return nodemap.check_data(ui, cl.index, data)
    elif opts['metadata']:
        unfi = repo.unfiltered()
        cl = unfi.changelog
        nm_data = nodemap.persisted_data(cl)
        if nm_data is not None:
            docket, data = nm_data
            ui.write((b"uid: %s\n") % docket.uid)
            ui.write((b"tip-rev: %d\n") % docket.tip_rev)
            ui.write((b"tip-node: %s\n") % hex(docket.tip_node))
            ui.write((b"data-length: %d\n") % docket.data_length)
            ui.write((b"data-unused: %d\n") % docket.data_unused)
            unused_perc = docket.data_unused * 100.0 / docket.data_length
            ui.write((b"data-unused: %2.3f%%\n") % unused_perc)


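# Hypothetical --metadata output (values made up), mirroring the writes in
# the 'metadata' branch above; the last line is data-unused expressed as a
# percentage of data-length (256 * 100.0 / 121088 = 0.211%):
#
#   $ hg debugnodemap --metadata
#   uid: b7e9f5c3a1d2e4f6
#   tip-rev: 5004
#   tip-node: 2a8f3e9b...
#   data-length: 121088
#   data-unused: 256
#   data-unused: 0.211%
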
@command(
    b'debugobsolete',
    [
        (b'', b'flags', 0, _(b'marker flags')),
        (
            b'',
            b'record-parents',
            False,
            _(b'record parent information for the precursor'),
        ),
        (b'r', b'rev', [], _(b'display markers relevant to REV')),
        (
            b'',
            b'exclusive',
            False,
            _(b'restrict display to markers only relevant to REV'),
        ),
        (b'', b'index', False, _(b'display index of the marker')),
        (b'', b'delete', [], _(b'delete markers specified by indices')),
    ]
    + cmdutil.commitopts2
    + cmdutil.formatteropts,
    _(b'[OBSOLETED [REPLACEMENT ...]]'),
)
def debugobsolete(ui, repo, precursor=None, *successors, **opts):
    """create an arbitrary obsolescence marker

    With no arguments, displays the list of obsolescence markers."""

    opts = pycompat.byteskwargs(opts)

    def parsenodeid(s):
        try:
            # We do not use revsingle/revrange functions here to accept
            # arbitrary node identifiers, possibly not present in the
            # local repository.
            n = bin(s)
            if len(n) != len(nullid):
                raise TypeError()
            return n
        except TypeError:
            raise error.InputError(
                b'changeset references must be full hexadecimal '
                b'node identifiers'
            )

    if opts.get(b'delete'):
        indices = []
        for v in opts.get(b'delete'):
            try:
                indices.append(int(v))
            except ValueError:
                raise error.InputError(
                    _(b'invalid index value: %r') % v,
                    hint=_(b'use integers for indices'),
                )

        if repo.currenttransaction():
            raise error.Abort(
                _(b'cannot delete obsmarkers in the middle of a transaction.')
            )

        with repo.lock():
            n = repair.deleteobsmarkers(repo.obsstore, indices)
            ui.write(_(b'deleted %i obsolescence markers\n') % n)

        return

    if precursor is not None:
        if opts[b'rev']:
            raise error.InputError(
                b'cannot select revision when creating marker'
            )
        metadata = {}
        metadata[b'user'] = encoding.fromlocal(opts[b'user'] or ui.username())
        succs = tuple(parsenodeid(succ) for succ in successors)
        l = repo.lock()
        try:
            tr = repo.transaction(b'debugobsolete')
            try:
                date = opts.get(b'date')
                if date:
                    date = dateutil.parsedate(date)
                else:
                    date = None
                prec = parsenodeid(precursor)
                parents = None
                if opts[b'record_parents']:
                    if prec not in repo.unfiltered():
                        raise error.Abort(
                            b'cannot use --record-parents on '
                            b'unknown changesets'
                        )
                    parents = repo.unfiltered()[prec].parents()
                    parents = tuple(p.node() for p in parents)
                repo.obsstore.create(
                    tr,
                    prec,
                    succs,
                    opts[b'flags'],
                    parents=parents,
                    date=date,
                    metadata=metadata,
                    ui=ui,
                )
                tr.close()
            except ValueError as exc:
                raise error.Abort(
                    _(b'bad obsmarker input: %s') % pycompat.bytestr(exc)
                )
            finally:
                tr.release()
        finally:
            l.release()
    else:
        if opts[b'rev']:
            revs = scmutil.revrange(repo, opts[b'rev'])
            nodes = [repo[r].node() for r in revs]
            markers = list(
                obsutil.getmarkers(
                    repo, nodes=nodes, exclusive=opts[b'exclusive']
                )
            )
            markers.sort(key=lambda x: x._data)
        else:
            markers = obsutil.getmarkers(repo)

        markerstoiter = markers
        isrelevant = lambda m: True
        if opts.get(b'rev') and opts.get(b'index'):
            markerstoiter = obsutil.getmarkers(repo)
            markerset = set(markers)
            isrelevant = lambda m: m in markerset

        fm = ui.formatter(b'debugobsolete', opts)
        for i, m in enumerate(markerstoiter):
            if not isrelevant(m):
                # marker can be irrelevant when we're iterating over a set
                # of markers (markerstoiter) which is bigger than the set
                # of markers we want to display (markers)
                # this can happen if both --index and --rev options are
                # provided and thus we need to iterate over all of the markers
                # to get the correct indices, but only display the ones that
                # are relevant to the --rev value
                continue
            fm.startitem()
            ind = i if opts.get(b'index') else None
            cmdutil.showmarker(fm, m, index=ind)
        fm.end()


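# Hypothetical marker creation and listing (node IDs abridged; the real
# arguments must be full 40-character hashes, as parsenodeid() above
# enforces):
#
#   $ hg debugobsolete 5c095ad7e90f... 8b36f2b31042... --date '0 0'
#   $ hg debugobsolete
#   5c095ad7e90f... 8b36f2b31042... 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'alice'}
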
@command(
    b'debugp1copies',
    [(b'r', b'rev', b'', _(b'revision to debug'), _(b'REV'))],
    _(b'[-r REV]'),
)
def debugp1copies(ui, repo, **opts):
    """dump copy information compared to p1"""

    opts = pycompat.byteskwargs(opts)
    ctx = scmutil.revsingle(repo, opts.get(b'rev'), default=None)
    for dst, src in ctx.p1copies().items():
        ui.write(b'%s -> %s\n' % (src, dst))


@command(
    b'debugp2copies',
    [(b'r', b'rev', b'', _(b'revision to debug'), _(b'REV'))],
    _(b'[-r REV]'),
)
def debugp2copies(ui, repo, **opts):
    """dump copy information compared to p2"""

    opts = pycompat.byteskwargs(opts)
    ctx = scmutil.revsingle(repo, opts.get(b'rev'), default=None)
    for dst, src in ctx.p2copies().items():
        ui.write(b'%s -> %s\n' % (src, dst))


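# Hypothetical output after committing `hg cp foo.txt bar.txt` (the file
# names are made up):
#
#   $ hg debugp1copies -r .
#   foo.txt -> bar.txt
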
@command(
    b'debugpathcomplete',
    [
        (b'f', b'full', None, _(b'complete an entire path')),
        (b'n', b'normal', None, _(b'show only normal files')),
        (b'a', b'added', None, _(b'show only added files')),
        (b'r', b'removed', None, _(b'show only removed files')),
    ],
    _(b'FILESPEC...'),
)
def debugpathcomplete(ui, repo, *specs, **opts):
    """complete part or all of a tracked path

    This command supports shells that offer path name completion. It
    currently completes only files already known to the dirstate.

    Completion extends only to the next path segment unless
    --full is specified, in which case entire paths are used."""

    def complete(path, acceptable):
        dirstate = repo.dirstate
        spec = os.path.normpath(os.path.join(encoding.getcwd(), path))
        rootdir = repo.root + pycompat.ossep
        if spec != repo.root and not spec.startswith(rootdir):
            return [], []
        if os.path.isdir(spec):
            spec += b'/'
        spec = spec[len(rootdir) :]
        fixpaths = pycompat.ossep != b'/'
        if fixpaths:
            spec = spec.replace(pycompat.ossep, b'/')
        speclen = len(spec)
        fullpaths = opts['full']
        files, dirs = set(), set()
        adddir, addfile = dirs.add, files.add
        for f, st in pycompat.iteritems(dirstate):
            if f.startswith(spec) and st[0] in acceptable:
                if fixpaths:
                    f = f.replace(b'/', pycompat.ossep)
                if fullpaths:
                    addfile(f)
                    continue
                s = f.find(pycompat.ossep, speclen)
                if s >= 0:
                    adddir(f[:s])
                else:
                    addfile(f)
        return files, dirs

    # dirstate status codes: n=normal, m=needs merging, a=added, r=removed
    acceptable = b''
    if opts['normal']:
        acceptable += b'nm'
    if opts['added']:
        acceptable += b'a'
    if opts['removed']:
        acceptable += b'r'
    cwd = repo.getcwd()
    if not specs:
        specs = [b'.']

    files, dirs = set(), set()
    for spec in specs:
        f, d = complete(spec, acceptable or b'nmar')
        files.update(f)
        dirs.update(d)
    files.update(dirs)
    ui.write(b'\n'.join(repo.pathto(p, cwd) for p in sorted(files)))
    ui.write(b'\n')


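# Hypothetical completion session in a repository tracking src/main.py and
# src/util.py: without --full, completion stops at the next path segment;
# with it, entire paths are emitted:
#
#   $ hg debugpathcomplete sr
#   src
#   $ hg debugpathcomplete --full sr
#   src/main.py
#   src/util.py
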
@command(
    b'debugpathcopies',
    cmdutil.walkopts,
    b'hg debugpathcopies REV1 REV2 [FILE]',
    inferrepo=True,
)
def debugpathcopies(ui, repo, rev1, rev2, *pats, **opts):
    """show copies between two revisions"""
    ctx1 = scmutil.revsingle(repo, rev1)
    ctx2 = scmutil.revsingle(repo, rev2)
    m = scmutil.match(ctx1, pats, opts)
    for dst, src in sorted(copies.pathcopies(ctx1, ctx2, m).items()):
        ui.write(b'%s -> %s\n' % (src, dst))


@command(b'debugpeer', [], _(b'PATH'), norepo=True)
def debugpeer(ui, path):
    """establish a connection to a peer repository"""
    # Always enable peer request logging. It requires --debug to be
    # displayed, though.
    overrides = {
        (b'devel', b'debug.peer-request'): True,
    }

    with ui.configoverride(overrides):
        peer = hg.peer(ui, {}, path)

        local = peer.local() is not None
        canpush = peer.canpush()

        ui.write(_(b'url: %s\n') % peer.url())
        ui.write(_(b'local: %s\n') % (_(b'yes') if local else _(b'no')))
        ui.write(_(b'pushable: %s\n') % (_(b'yes') if canpush else _(b'no')))


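# Hypothetical output for a remote peer (the URL is made up):
#
#   $ hg debugpeer ssh://user@host/repo
#   url: ssh://user@host/repo
#   local: no
#   pushable: yes
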
@command(
    b'debugpickmergetool',
    [
        (b'r', b'rev', b'', _(b'check for files in this revision'), _(b'REV')),
        (b'', b'changedelete', None, _(b'emulate merging change and delete')),
    ]
    + cmdutil.walkopts
    + cmdutil.mergetoolopts,
    _(b'[PATTERN]...'),
    inferrepo=True,
)
def debugpickmergetool(ui, repo, *pats, **opts):
    """examine which merge tool is chosen for the specified file

    As described in :hg:`help merge-tools`, Mercurial examines the
    configurations below in this order to decide which merge tool is
    chosen for the specified file.

    1. ``--tool`` option
    2. ``HGMERGE`` environment variable
    3. configurations in ``merge-patterns`` section
    4. configuration of ``ui.merge``
    5. configurations in ``merge-tools`` section
    6. ``hgmerge`` tool (for historical reasons only)
    7. default tool for fallback (``:merge`` or ``:prompt``)

    This command writes out the examination result in the style below::

        FILE = MERGETOOL

    By default, all files known in the first parent context of the
    working directory are examined. Use file patterns and/or -I/-X
    options to limit target files. -r/--rev is also useful to examine
    files in another context without actually updating to it.

    With --debug, this command also shows the warning messages emitted
    while matching against ``merge-patterns`` and so on. It is
    recommended to use this option with explicit file patterns and/or
    -I/-X options, because this option increases the amount of output
    per file according to the configurations in hgrc.

    With -v/--verbose, this command shows the configurations below
    first (only if specified).

    - ``--tool`` option
    - ``HGMERGE`` environment variable
    - configuration of ``ui.merge``

    If a merge tool is chosen before matching against
    ``merge-patterns``, this command can't show any helpful
    information, even with --debug. In such a case, the information
    above is useful for knowing why a merge tool was chosen.
    """
    opts = pycompat.byteskwargs(opts)
    overrides = {}
    if opts[b'tool']:
        overrides[(b'ui', b'forcemerge')] = opts[b'tool']
        ui.notenoi18n(b'with --tool %r\n' % (pycompat.bytestr(opts[b'tool'])))

    with ui.configoverride(overrides, b'debugmergepatterns'):
        hgmerge = encoding.environ.get(b"HGMERGE")
        if hgmerge is not None:
            ui.notenoi18n(b'with HGMERGE=%r\n' % (pycompat.bytestr(hgmerge)))
        uimerge = ui.config(b"ui", b"merge")
        if uimerge:
            ui.notenoi18n(b'with ui.merge=%r\n' % (pycompat.bytestr(uimerge)))

        ctx = scmutil.revsingle(repo, opts.get(b'rev'))
        m = scmutil.match(ctx, pats, opts)
        changedelete = opts[b'changedelete']
        for path in ctx.walk(m):
            fctx = ctx[path]
            try:
                if not ui.debugflag:
                    ui.pushbuffer(error=True)
                tool, toolpath = filemerge._picktool(
                    repo,
                    ui,
                    path,
                    fctx.isbinary(),
                    b'l' in fctx.flags(),
                    changedelete,
                )
            finally:
                if not ui.debugflag:
                    ui.popbuffer()
            ui.write(b'%s = %s\n' % (path, tool))


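# Hypothetical output (which tool is picked depends entirely on the local
# configuration; here a text file falls back to :merge and a binary file to
# :prompt):
#
#   $ hg debugpickmergetool
#   README.txt = :merge
#   logo.png = :prompt
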
@command(b'debugpushkey', [], _(b'REPO NAMESPACE [KEY OLD NEW]'), norepo=True)
def debugpushkey(ui, repopath, namespace, *keyinfo, **opts):
    """access the pushkey key/value protocol

    With two args, list the keys in the given namespace.

    With five args, set a key to new if it currently is set to old.
    Reports success or failure.
    """

    target = hg.peer(ui, {}, repopath)
    if keyinfo:
        key, old, new = keyinfo
        with target.commandexecutor() as e:
            r = e.callcommand(
                b'pushkey',
                {
                    b'namespace': namespace,
                    b'key': key,
                    b'old': old,
                    b'new': new,
                },
            ).result()

        ui.status(pycompat.bytestr(r) + b'\n')
        return not r
    else:
        for k, v in sorted(pycompat.iteritems(target.listkeys(namespace))):
            ui.write(
                b"%s\t%s\n" % (stringutil.escapestr(k), stringutil.escapestr(v))
            )


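# Hypothetical listing form, one tab-separated key/value pair per line (the
# bookmark name and node are made up):
#
#   $ hg debugpushkey . bookmarks
#   main    8b36f2b31042...
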
@command(b'debugpvec', [], _(b'A B'))
def debugpvec(ui, repo, a, b=None):
    ca = scmutil.revsingle(repo, a)
    cb = scmutil.revsingle(repo, b)
    pa = pvec.ctxpvec(ca)
    pb = pvec.ctxpvec(cb)
    if pa == pb:
        rel = b"="
    elif pa > pb:
        rel = b">"
    elif pa < pb:
        rel = b"<"
    elif pa | pb:
        rel = b"|"
    ui.write(_(b"a: %s\n") % pa)
    ui.write(_(b"b: %s\n") % pb)
    ui.write(_(b"depth(a): %d depth(b): %d\n") % (pa._depth, pb._depth))
    ui.write(
        _(b"delta: %d hdist: %d distance: %d relation: %s\n")
        % (
            abs(pa._depth - pb._depth),
            pvec._hamming(pa._vec, pb._vec),
            pa.distance(pb),
            rel,
        )
    )


@command(
    b'debugrebuilddirstate|debugrebuildstate',
    [
        (b'r', b'rev', b'', _(b'revision to rebuild to'), _(b'REV')),
        (
            b'',
            b'minimal',
            None,
            _(
                b'only rebuild files that are inconsistent with '
                b'the working copy parent'
            ),
        ),
    ],
    _(b'[-r REV]'),
)
def debugrebuilddirstate(ui, repo, rev, **opts):
    """rebuild the dirstate as it would look for the given revision

    If no revision is specified, the first current parent will be used.

    The dirstate will be set to the files of the given revision.
    The actual working directory content or existing dirstate
    information such as adds or removes is not considered.

    ``minimal`` will only rebuild the dirstate status for files that claim to
    be tracked but are not in the parent manifest, or that exist in the parent
    manifest but are not in the dirstate. It will not change adds, removes, or
    modified files that are in the working copy parent.

    One use of this command is to make the next :hg:`status` invocation
    check the actual file content.
    """
    ctx = scmutil.revsingle(repo, rev)
    with repo.wlock():
        dirstate = repo.dirstate
        changedfiles = None
        # See command doc for what minimal does.
        if opts.get('minimal'):
            manifestfiles = set(ctx.manifest().keys())
            dirstatefiles = set(dirstate)
            manifestonly = manifestfiles - dirstatefiles
            dsonly = dirstatefiles - manifestfiles
            dsnotadded = {f for f in dsonly if dirstate[f] != b'a'}
            changedfiles = manifestonly | dsnotadded

        dirstate.rebuild(ctx.node(), ctx.manifest(), changedfiles)


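# A worked sketch of the --minimal set arithmetic above, on made-up names:
# with manifest = {a, b} and dirstate = {b, c (added), d}:
#   manifestonly = {a}; dsonly = {c, d}; dsnotadded = {d}
#   changedfiles = {a, d}   (b and the freshly added c are left untouched)
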
@command(b'debugrebuildfncache', [], b'')
def debugrebuildfncache(ui, repo):
    """rebuild the fncache file"""
    repair.rebuildfncache(ui, repo)


@command(
    b'debugrename',
    [(b'r', b'rev', b'', _(b'revision to debug'), _(b'REV'))],
    _(b'[-r REV] [FILE]...'),
)
def debugrename(ui, repo, *pats, **opts):
    """dump rename information"""

    opts = pycompat.byteskwargs(opts)
    ctx = scmutil.revsingle(repo, opts.get(b'rev'))
    m = scmutil.match(ctx, pats, opts)
    for abs in ctx.walk(m):
        fctx = ctx[abs]
        o = fctx.filelog().renamed(fctx.filenode())
        rel = repo.pathto(abs)
        if o:
            ui.write(_(b"%s renamed from %s:%s\n") % (rel, o[0], hex(o[1])))
        else:
            ui.write(_(b"%s not renamed\n") % rel)


@command(b'debugrequires|debugrequirements', [], b'')
def debugrequirements(ui, repo):
    """print the current repo requirements"""
    for r in sorted(repo.requirements):
        ui.write(b"%s\n" % r)


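# Hypothetical output (the exact set depends on the repository's format
# options):
#
#   $ hg debugrequires
#   dotencode
#   fncache
#   generaldelta
#   revlogv1
#   store
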
2859 @command(
2859 @command(
2860 b'debugrevlog',
2860 b'debugrevlog',
2861 cmdutil.debugrevlogopts + [(b'd', b'dump', False, _(b'dump index data'))],
2861 cmdutil.debugrevlogopts + [(b'd', b'dump', False, _(b'dump index data'))],
2862 _(b'-c|-m|FILE'),
2862 _(b'-c|-m|FILE'),
2863 optionalrepo=True,
2863 optionalrepo=True,
2864 )
2864 )
2865 def debugrevlog(ui, repo, file_=None, **opts):
2865 def debugrevlog(ui, repo, file_=None, **opts):
2866 """show data and statistics about a revlog"""
2866 """show data and statistics about a revlog"""
2867 opts = pycompat.byteskwargs(opts)
2867 opts = pycompat.byteskwargs(opts)
2868 r = cmdutil.openrevlog(repo, b'debugrevlog', file_, opts)
2868 r = cmdutil.openrevlog(repo, b'debugrevlog', file_, opts)
2869
2869
2870 if opts.get(b"dump"):
2870 if opts.get(b"dump"):
2871 numrevs = len(r)
2871 numrevs = len(r)
2872 ui.write(
2872 ui.write(
2873 (
2873 (
2874 b"# rev p1rev p2rev start end deltastart base p1 p2"
2874 b"# rev p1rev p2rev start end deltastart base p1 p2"
2875 b" rawsize totalsize compression heads chainlen\n"
2875 b" rawsize totalsize compression heads chainlen\n"
2876 )
2876 )
2877 )
2877 )
2878 ts = 0
2878 ts = 0
2879 heads = set()
2879 heads = set()
2880
2880
2881 for rev in pycompat.xrange(numrevs):
2881 for rev in pycompat.xrange(numrevs):
2882 dbase = r.deltaparent(rev)
2882 dbase = r.deltaparent(rev)
2883 if dbase == -1:
2883 if dbase == -1:
2884 dbase = rev
2884 dbase = rev
2885 cbase = r.chainbase(rev)
2885 cbase = r.chainbase(rev)
2886 clen = r.chainlen(rev)
2886 clen = r.chainlen(rev)
2887 p1, p2 = r.parentrevs(rev)
2887 p1, p2 = r.parentrevs(rev)
2888 rs = r.rawsize(rev)
2888 rs = r.rawsize(rev)
2889 ts = ts + rs
2889 ts = ts + rs
2890 heads -= set(r.parentrevs(rev))
2890 heads -= set(r.parentrevs(rev))
2891 heads.add(rev)
2891 heads.add(rev)
2892 try:
2892 try:
2893 compression = ts / r.end(rev)
2893 compression = ts / r.end(rev)
2894 except ZeroDivisionError:
2894 except ZeroDivisionError:
2895 compression = 0
2895 compression = 0
2896 ui.write(
2896 ui.write(
2897 b"%5d %5d %5d %5d %5d %10d %4d %4d %4d %7d %9d "
2897 b"%5d %5d %5d %5d %5d %10d %4d %4d %4d %7d %9d "
2898 b"%11d %5d %8d\n"
2898 b"%11d %5d %8d\n"
2899 % (
2899 % (
2900 rev,
2900 rev,
2901 p1,
2901 p1,
2902 p2,
2902 p2,
2903 r.start(rev),
2903 r.start(rev),
2904 r.end(rev),
2904 r.end(rev),
2905 r.start(dbase),
2905 r.start(dbase),
2906 r.start(cbase),
2906 r.start(cbase),
2907 r.start(p1),
2907 r.start(p1),
2908 r.start(p2),
2908 r.start(p2),
2909 rs,
2909 rs,
2910 ts,
2910 ts,
2911 compression,
2911 compression,
2912 len(heads),
2912 len(heads),
2913 clen,
2913 clen,
2914 )
2914 )
2915 )
2915 )
2916 return 0
2916 return 0
2917
2917
2918 v = r.version
2918 v = r.version
2919 format = v & 0xFFFF
2919 format = v & 0xFFFF
2920 flags = []
2920 flags = []
2921 gdelta = False
2921 gdelta = False
2922 if v & revlog.FLAG_INLINE_DATA:
2922 if v & revlog.FLAG_INLINE_DATA:
2923 flags.append(b'inline')
2923 flags.append(b'inline')
2924 if v & revlog.FLAG_GENERALDELTA:
2924 if v & revlog.FLAG_GENERALDELTA:
2925 gdelta = True
2925 gdelta = True
2926 flags.append(b'generaldelta')
2926 flags.append(b'generaldelta')
2927 if not flags:
2927 if not flags:
2928 flags = [b'(none)']
2928 flags = [b'(none)']
2929
2929
2930 ### tracks merge vs single parent
2930 ### tracks merge vs single parent
2931 nummerges = 0
2931 nummerges = 0
2932
2932
2933 ### tracks ways the "delta" are build
2933 ### tracks ways the "delta" are build
2934 # nodelta
2934 # nodelta
2935 numempty = 0
2935 numempty = 0
2936 numemptytext = 0
2936 numemptytext = 0
2937 numemptydelta = 0
2937 numemptydelta = 0
2938 # full file content
2938 # full file content
2939 numfull = 0
2939 numfull = 0
2940 # intermediate snapshot against a prior snapshot
2940 # intermediate snapshot against a prior snapshot
2941 numsemi = 0
2941 numsemi = 0
2942 # snapshot count per depth
2942 # snapshot count per depth
2943 numsnapdepth = collections.defaultdict(lambda: 0)
2943 numsnapdepth = collections.defaultdict(lambda: 0)
2944 # delta against previous revision
2944 # delta against previous revision
2945 numprev = 0
2945 numprev = 0
2946 # delta against first or second parent (not prev)
2946 # delta against first or second parent (not prev)
2947 nump1 = 0
2947 nump1 = 0
2948 nump2 = 0
2948 nump2 = 0
2949 # delta against neither prev nor parents
2949 # delta against neither prev nor parents
2950 numother = 0
2950 numother = 0
2951 # delta against prev that are also first or second parent
2951 # delta against prev that are also first or second parent
2952 # (details of `numprev`)
2952 # (details of `numprev`)
2953 nump1prev = 0
2953 nump1prev = 0
2954 nump2prev = 0
2954 nump2prev = 0
2955
2955
2956 # data about delta chain of each revs
2956 # data about delta chain of each revs
2957 chainlengths = []
2957 chainlengths = []
2958 chainbases = []
2958 chainbases = []
2959 chainspans = []
2959 chainspans = []
2960
2960
2961 # data about each revision
2961 # data about each revision
2962 datasize = [None, 0, 0]
2962 datasize = [None, 0, 0]
2963 fullsize = [None, 0, 0]
2963 fullsize = [None, 0, 0]
2964 semisize = [None, 0, 0]
2964 semisize = [None, 0, 0]
2965 # snapshot count per depth
2965 # snapshot count per depth
2966 snapsizedepth = collections.defaultdict(lambda: [None, 0, 0])
2966 snapsizedepth = collections.defaultdict(lambda: [None, 0, 0])
2967 deltasize = [None, 0, 0]
2967 deltasize = [None, 0, 0]
2968 chunktypecounts = {}
2968 chunktypecounts = {}
2969 chunktypesizes = {}
2969 chunktypesizes = {}
2970
2970
2971 def addsize(size, l):
2971 def addsize(size, l):
2972 if l[0] is None or size < l[0]:
2972 if l[0] is None or size < l[0]:
2973 l[0] = size
2973 l[0] = size
2974 if size > l[1]:
2974 if size > l[1]:
2975 l[1] = size
2975 l[1] = size
2976 l[2] += size
2976 l[2] += size

    numrevs = len(r)
    for rev in pycompat.xrange(numrevs):
        p1, p2 = r.parentrevs(rev)
        delta = r.deltaparent(rev)
        if format > 0:
            addsize(r.rawsize(rev), datasize)
        if p2 != nullrev:
            nummerges += 1
        size = r.length(rev)
        if delta == nullrev:
            chainlengths.append(0)
            chainbases.append(r.start(rev))
            chainspans.append(size)
            if size == 0:
                numempty += 1
                numemptytext += 1
            else:
                numfull += 1
                numsnapdepth[0] += 1
                addsize(size, fullsize)
                addsize(size, snapsizedepth[0])
        else:
            chainlengths.append(chainlengths[delta] + 1)
            baseaddr = chainbases[delta]
            revaddr = r.start(rev)
            chainbases.append(baseaddr)
            chainspans.append((revaddr - baseaddr) + size)
            if size == 0:
                numempty += 1
                numemptydelta += 1
            elif r.issnapshot(rev):
                addsize(size, semisize)
                numsemi += 1
                depth = r.snapshotdepth(rev)
                numsnapdepth[depth] += 1
                addsize(size, snapsizedepth[depth])
            else:
                addsize(size, deltasize)
                if delta == rev - 1:
                    numprev += 1
                    if delta == p1:
                        nump1prev += 1
                    elif delta == p2:
                        nump2prev += 1
                elif delta == p1:
                    nump1 += 1
                elif delta == p2:
                    nump2 += 1
                elif delta != nullrev:
                    numother += 1

        # Obtain data on the raw chunks in the revlog.
        if util.safehasattr(r, b'_getsegmentforrevs'):
            segment = r._getsegmentforrevs(rev, rev)[1]
        else:
            segment = r._revlog._getsegmentforrevs(rev, rev)[1]
        if segment:
            chunktype = bytes(segment[0:1])
        else:
            chunktype = b'empty'

        if chunktype not in chunktypecounts:
            chunktypecounts[chunktype] = 0
            chunktypesizes[chunktype] = 0

        chunktypecounts[chunktype] += 1
        chunktypesizes[chunktype] += size

    # Adjust size min value for empty cases
    for size in (datasize, fullsize, semisize, deltasize):
        if size[0] is None:
            size[0] = 0

    numdeltas = numrevs - numfull - numempty - numsemi
    numoprev = numprev - nump1prev - nump2prev
    totalrawsize = datasize[2]
    datasize[2] /= numrevs
    fulltotal = fullsize[2]
    if numfull == 0:
        fullsize[2] = 0
    else:
        fullsize[2] /= numfull
    semitotal = semisize[2]
    snaptotal = {}
    if numsemi > 0:
        semisize[2] /= numsemi
    for depth in snapsizedepth:
        snaptotal[depth] = snapsizedepth[depth][2]
        snapsizedepth[depth][2] /= numsnapdepth[depth]

    deltatotal = deltasize[2]
    if numdeltas > 0:
        deltasize[2] /= numdeltas
    totalsize = fulltotal + semitotal + deltatotal
    avgchainlen = sum(chainlengths) / numrevs
    maxchainlen = max(chainlengths)
    maxchainspan = max(chainspans)
    compratio = 1
    if totalsize:
        compratio = totalrawsize / totalsize

    basedfmtstr = b'%%%dd\n'
    basepcfmtstr = b'%%%dd %s(%%5.2f%%%%)\n'

    def dfmtstr(max):
        return basedfmtstr % len(str(max))

    def pcfmtstr(max, padding=0):
        return basepcfmtstr % (len(str(max)), b' ' * padding)

    def pcfmt(value, total):
        if total:
            return (value, 100 * float(value) / total)
        else:
            return value, 100.0
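    # Illustrative note (not in the original source): pcfmt(25, 200)
    # yields (25, 12.5), which pcfmtstr(200) renders as b' 25 (12.50%)\n',
    # padding the count to the width of the largest possible total.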

    ui.writenoi18n(b'format : %d\n' % format)
    ui.writenoi18n(b'flags  : %s\n' % b', '.join(flags))

    ui.write(b'\n')
    fmt = pcfmtstr(totalsize)
    fmt2 = dfmtstr(totalsize)
    ui.writenoi18n(b'revisions     : ' + fmt2 % numrevs)
    ui.writenoi18n(b'    merges    : ' + fmt % pcfmt(nummerges, numrevs))
    ui.writenoi18n(
        b'    normal    : ' + fmt % pcfmt(numrevs - nummerges, numrevs)
    )
    ui.writenoi18n(b'revisions     : ' + fmt2 % numrevs)
    ui.writenoi18n(b'    empty     : ' + fmt % pcfmt(numempty, numrevs))
    ui.writenoi18n(
        b'                   text  : '
        + fmt % pcfmt(numemptytext, numemptytext + numemptydelta)
    )
    ui.writenoi18n(
        b'                   delta : '
        + fmt % pcfmt(numemptydelta, numemptytext + numemptydelta)
    )
    ui.writenoi18n(
        b'    snapshot  : ' + fmt % pcfmt(numfull + numsemi, numrevs)
    )
    for depth in sorted(numsnapdepth):
        ui.write(
            (b'      lvl-%-3d :       ' % depth)
            + fmt % pcfmt(numsnapdepth[depth], numrevs)
        )
    ui.writenoi18n(b'    deltas    : ' + fmt % pcfmt(numdeltas, numrevs))
    ui.writenoi18n(b'revision size : ' + fmt2 % totalsize)
    ui.writenoi18n(
        b'    snapshot  : ' + fmt % pcfmt(fulltotal + semitotal, totalsize)
    )
    for depth in sorted(numsnapdepth):
        ui.write(
            (b'      lvl-%-3d :       ' % depth)
            + fmt % pcfmt(snaptotal[depth], totalsize)
        )
    ui.writenoi18n(b'    deltas    : ' + fmt % pcfmt(deltatotal, totalsize))

    def fmtchunktype(chunktype):
        if chunktype == b'empty':
            return b'    %s     : ' % chunktype
        elif chunktype in pycompat.bytestr(string.ascii_letters):
            return b'    0x%s (%s)  : ' % (hex(chunktype), chunktype)
        else:
            return b'    0x%s      : ' % hex(chunktype)

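    # Illustrative note (not in the original source): for a zlib-compressed
    # chunk the type byte is b'x', so fmtchunktype(b'x') labels the row
    # "0x78 (x)"; an uncompressed chunk (b'u') would show as "0x75 (u)".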
    ui.write(b'\n')
    ui.writenoi18n(b'chunks        : ' + fmt2 % numrevs)
    for chunktype in sorted(chunktypecounts):
        ui.write(fmtchunktype(chunktype))
        ui.write(fmt % pcfmt(chunktypecounts[chunktype], numrevs))
    ui.writenoi18n(b'chunks size   : ' + fmt2 % totalsize)
    for chunktype in sorted(chunktypecounts):
        ui.write(fmtchunktype(chunktype))
        ui.write(fmt % pcfmt(chunktypesizes[chunktype], totalsize))

    ui.write(b'\n')
    fmt = dfmtstr(max(avgchainlen, maxchainlen, maxchainspan, compratio))
    ui.writenoi18n(b'avg chain length  : ' + fmt % avgchainlen)
    ui.writenoi18n(b'max chain length  : ' + fmt % maxchainlen)
    ui.writenoi18n(b'max chain reach   : ' + fmt % maxchainspan)
    ui.writenoi18n(b'compression ratio : ' + fmt % compratio)

    if format > 0:
        ui.write(b'\n')
        ui.writenoi18n(
            b'uncompressed data size (min/max/avg) : %d / %d / %d\n'
            % tuple(datasize)
        )
        ui.writenoi18n(
            b'full revision size (min/max/avg)     : %d / %d / %d\n'
            % tuple(fullsize)
        )
        ui.writenoi18n(
            b'inter-snapshot size (min/max/avg)    : %d / %d / %d\n'
            % tuple(semisize)
        )
        for depth in sorted(snapsizedepth):
            if depth == 0:
                continue
            ui.writenoi18n(
                b'    level-%-3d (min/max/avg)          : %d / %d / %d\n'
                % ((depth,) + tuple(snapsizedepth[depth]))
            )
        ui.writenoi18n(
            b'delta size (min/max/avg)             : %d / %d / %d\n'
            % tuple(deltasize)
        )

    if numdeltas > 0:
        ui.write(b'\n')
        fmt = pcfmtstr(numdeltas)
        fmt2 = pcfmtstr(numdeltas, 4)
        ui.writenoi18n(
            b'deltas against prev  : ' + fmt % pcfmt(numprev, numdeltas)
        )
        if numprev > 0:
            ui.writenoi18n(
                b'    where prev = p1  : ' + fmt2 % pcfmt(nump1prev, numprev)
            )
            ui.writenoi18n(
                b'    where prev = p2  : ' + fmt2 % pcfmt(nump2prev, numprev)
            )
            ui.writenoi18n(
                b'    other            : ' + fmt2 % pcfmt(numoprev, numprev)
            )
        if gdelta:
            ui.writenoi18n(
                b'deltas against p1    : ' + fmt % pcfmt(nump1, numdeltas)
            )
            ui.writenoi18n(
                b'deltas against p2    : ' + fmt % pcfmt(nump2, numdeltas)
            )
            ui.writenoi18n(
                b'deltas against other : ' + fmt % pcfmt(numother, numdeltas)
            )


@command(
    b'debugrevlogindex',
    cmdutil.debugrevlogopts
    + [(b'f', b'format', 0, _(b'revlog format'), _(b'FORMAT'))],
    _(b'[-f FORMAT] -c|-m|FILE'),
    optionalrepo=True,
)
def debugrevlogindex(ui, repo, file_=None, **opts):
    """dump the contents of a revlog index"""
    opts = pycompat.byteskwargs(opts)
    r = cmdutil.openrevlog(repo, b'debugrevlogindex', file_, opts)
    format = opts.get(b'format', 0)
    if format not in (0, 1):
        raise error.Abort(_(b"unknown format %d") % format)

    if ui.debugflag:
        shortfn = hex
    else:
        shortfn = short

    # There might not be anything in r, so have a sane default
    idlen = 12
    for i in r:
        idlen = len(shortfn(r.node(i)))
        break

    if format == 0:
        if ui.verbose:
            ui.writenoi18n(
                b"   rev    offset  length linkrev %s %s p2\n"
                % (b"nodeid".ljust(idlen), b"p1".ljust(idlen))
            )
        else:
            ui.writenoi18n(
                b"   rev linkrev %s %s p2\n"
                % (b"nodeid".ljust(idlen), b"p1".ljust(idlen))
            )
    elif format == 1:
        if ui.verbose:
            ui.writenoi18n(
                (
                    b"   rev flag   offset   length     size   link     p1"
                    b"     p2 %s\n"
                )
                % b"nodeid".rjust(idlen)
            )
        else:
            ui.writenoi18n(
                b"   rev flag     size   link     p1     p2 %s\n"
                % b"nodeid".rjust(idlen)
            )

    for i in r:
        node = r.node(i)
        if format == 0:
            try:
                pp = r.parents(node)
            except Exception:
                pp = [nullid, nullid]
            if ui.verbose:
                ui.write(
                    b"% 6d % 9d % 7d % 7d %s %s %s\n"
                    % (
                        i,
                        r.start(i),
                        r.length(i),
                        r.linkrev(i),
                        shortfn(node),
                        shortfn(pp[0]),
                        shortfn(pp[1]),
                    )
                )
            else:
                ui.write(
                    b"% 6d % 7d %s %s %s\n"
                    % (
                        i,
                        r.linkrev(i),
                        shortfn(node),
                        shortfn(pp[0]),
                        shortfn(pp[1]),
                    )
                )
        elif format == 1:
            pr = r.parentrevs(i)
            if ui.verbose:
                ui.write(
                    b"% 6d %04x % 8d % 8d % 8d % 6d % 6d % 6d %s\n"
                    % (
                        i,
                        r.flags(i),
                        r.start(i),
                        r.length(i),
                        r.rawsize(i),
                        r.linkrev(i),
                        pr[0],
                        pr[1],
                        shortfn(node),
                    )
                )
            else:
                ui.write(
                    b"% 6d %04x % 8d % 6d % 6d % 6d %s\n"
                    % (
                        i,
                        r.flags(i),
                        r.rawsize(i),
                        r.linkrev(i),
                        pr[0],
                        pr[1],
                        shortfn(node),
                    )
                )


@command(
    b'debugrevspec',
    [
        (
            b'',
            b'optimize',
            None,
            _(b'print parsed tree after optimizing (DEPRECATED)'),
        ),
        (
            b'',
            b'show-revs',
            True,
            _(b'print list of result revisions (default)'),
        ),
        (
            b's',
            b'show-set',
            None,
            _(b'print internal representation of result set'),
        ),
        (
            b'p',
            b'show-stage',
            [],
            _(b'print parsed tree at the given stage'),
            _(b'NAME'),
        ),
        (b'', b'no-optimized', False, _(b'evaluate tree without optimization')),
        (b'', b'verify-optimized', False, _(b'verify optimized result')),
    ],
    b'REVSPEC',
)
def debugrevspec(ui, repo, expr, **opts):
    """parse and apply a revision specification

    Use the -p/--show-stage option to print the parsed tree at the given
    stages. Use -p all to print the tree at every stage.

    Use the --no-show-revs option with -s or -p to print only the set
    representation or the parsed tree respectively.

    Use --verify-optimized to compare the optimized result with the
    unoptimized one. Returns 1 if the optimized result differs.
    """
    opts = pycompat.byteskwargs(opts)
    aliases = ui.configitems(b'revsetalias')
    stages = [
        (b'parsed', lambda tree: tree),
        (
            b'expanded',
            lambda tree: revsetlang.expandaliases(tree, aliases, ui.warn),
        ),
        (b'concatenated', revsetlang.foldconcat),
        (b'analyzed', revsetlang.analyze),
        (b'optimized', revsetlang.optimize),
    ]
    if opts[b'no_optimized']:
        stages = stages[:-1]
    if opts[b'verify_optimized'] and opts[b'no_optimized']:
        raise error.Abort(
            _(b'cannot use --verify-optimized with --no-optimized')
        )
    stagenames = {n for n, f in stages}

    showalways = set()
    showchanged = set()
    if ui.verbose and not opts[b'show_stage']:
        # show parsed tree by --verbose (deprecated)
        showalways.add(b'parsed')
        showchanged.update([b'expanded', b'concatenated'])
        if opts[b'optimize']:
            showalways.add(b'optimized')
    if opts[b'show_stage'] and opts[b'optimize']:
        raise error.Abort(_(b'cannot use --optimize with --show-stage'))
    if opts[b'show_stage'] == [b'all']:
        showalways.update(stagenames)
    else:
        for n in opts[b'show_stage']:
            if n not in stagenames:
                raise error.Abort(_(b'invalid stage name: %s') % n)
        showalways.update(opts[b'show_stage'])

    treebystage = {}
    printedtree = None
    tree = revsetlang.parse(expr, lookup=revset.lookupfn(repo))
    for n, f in stages:
        treebystage[n] = tree = f(tree)
        if n in showalways or (n in showchanged and tree != printedtree):
            if opts[b'show_stage'] or n != b'parsed':
                ui.write(b"* %s:\n" % n)
            ui.write(revsetlang.prettyformat(tree), b"\n")
            printedtree = tree

    if opts[b'verify_optimized']:
        arevs = revset.makematcher(treebystage[b'analyzed'])(repo)
        brevs = revset.makematcher(treebystage[b'optimized'])(repo)
        if opts[b'show_set'] or (opts[b'show_set'] is None and ui.verbose):
            ui.writenoi18n(
                b"* analyzed set:\n", stringutil.prettyrepr(arevs), b"\n"
            )
            ui.writenoi18n(
                b"* optimized set:\n", stringutil.prettyrepr(brevs), b"\n"
            )
        arevs = list(arevs)
        brevs = list(brevs)
        if arevs == brevs:
            return 0
        ui.writenoi18n(b'--- analyzed\n', label=b'diff.file_a')
        ui.writenoi18n(b'+++ optimized\n', label=b'diff.file_b')
        sm = difflib.SequenceMatcher(None, arevs, brevs)
        for tag, alo, ahi, blo, bhi in sm.get_opcodes():
            if tag in ('delete', 'replace'):
                for c in arevs[alo:ahi]:
                    ui.write(b'-%d\n' % c, label=b'diff.deleted')
            if tag in ('insert', 'replace'):
                for c in brevs[blo:bhi]:
                    ui.write(b'+%d\n' % c, label=b'diff.inserted')
            if tag == 'equal':
                for c in arevs[alo:ahi]:
                    ui.write(b' %d\n' % c)
        return 1

    func = revset.makematcher(tree)
    revs = func(repo)
    if opts[b'show_set'] or (opts[b'show_set'] is None and ui.verbose):
        ui.writenoi18n(b"* set:\n", stringutil.prettyrepr(revs), b"\n")
    if not opts[b'show_revs']:
        return
    for c in revs:
        ui.write(b"%d\n" % c)


@command(
    b'debugserve',
    [
        (
            b'',
            b'sshstdio',
            False,
            _(b'run an SSH server bound to process handles'),
        ),
        (b'', b'logiofd', b'', _(b'file descriptor to log server I/O to')),
        (b'', b'logiofile', b'', _(b'file to log server I/O to')),
    ],
    b'',
)
def debugserve(ui, repo, **opts):
    """run a server with advanced settings

    This command is similar to :hg:`serve`. It exists partially as a
    workaround to the fact that ``hg serve --stdio`` must have specific
    arguments for security reasons.
    """
    opts = pycompat.byteskwargs(opts)

    if not opts[b'sshstdio']:
        raise error.Abort(_(b'only --sshstdio is currently supported'))

    logfh = None

    if opts[b'logiofd'] and opts[b'logiofile']:
        raise error.Abort(_(b'cannot use both --logiofd and --logiofile'))

    if opts[b'logiofd']:
        # Ideally we would be line buffered. But line buffering in binary
        # mode isn't supported and emits a warning in Python 3.8+. Disabling
        # buffering could have performance impacts. But since this isn't
        # performance critical code, it should be fine.
        try:
            logfh = os.fdopen(int(opts[b'logiofd']), 'ab', 0)
        except OSError as e:
            if e.errno != errno.ESPIPE:
                raise
            # can't seek a pipe, so `ab` mode fails on py3
            logfh = os.fdopen(int(opts[b'logiofd']), 'wb', 0)
    elif opts[b'logiofile']:
        logfh = open(opts[b'logiofile'], b'ab', 0)

    s = wireprotoserver.sshserver(ui, repo, logfh=logfh)
    s.serve_forever()


@command(b'debugsetparents', [], _(b'REV1 [REV2]'))
def debugsetparents(ui, repo, rev1, rev2=None):
    """manually set the parents of the current working directory (DANGEROUS)

    This command is not what you are looking for and should not be used.
    Using this command will most certainly result in slight corruption of
    the file level histories within your repository. DO NOT USE THIS COMMAND.

    The command updates the p1 and p2 fields in the dirstate without touching
    anything else. This is useful for writing repository conversion tools,
    but it should be used with extreme care. For example, neither the working
    directory nor the dirstate is updated, so file status may be incorrect
    after running this command. Only use it if you are one of the few people
    who deeply understand both conversion tools and file level histories. If
    you are reading this help, you are not one of those people (most of them
    sailed west from Mithlond anyway).

    So one last time: DO NOT USE THIS COMMAND.

    Returns 0 on success.
    """

    node1 = scmutil.revsingle(repo, rev1).node()
    node2 = scmutil.revsingle(repo, rev2, b'null').node()

    with repo.wlock():
        repo.setparents(node1, node2)


@command(b'debugsidedata', cmdutil.debugrevlogopts, _(b'-c|-m|FILE REV'))
def debugsidedata(ui, repo, file_, rev=None, **opts):
    """dump the side data for a cl/manifest/file revision

    Use --verbose to dump the sidedata content."""
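    # Illustrative usage (not in the original source):
    #   hg debugsidedata -c 0       # sidedata entries of changelog rev 0
    #   hg debugsidedata -v -m 0    # manifest rev 0, including raw content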
    opts = pycompat.byteskwargs(opts)
    if opts.get(b'changelog') or opts.get(b'manifest') or opts.get(b'dir'):
        if rev is not None:
            raise error.CommandError(b'debugdata', _(b'invalid arguments'))
        file_, rev = None, file_
    elif rev is None:
        raise error.CommandError(b'debugdata', _(b'invalid arguments'))
    r = cmdutil.openstorage(repo, b'debugdata', file_, opts)
    r = getattr(r, '_revlog', r)
    try:
        sidedata = r.sidedata(r.lookup(rev))
    except KeyError:
        raise error.Abort(_(b'invalid revision identifier %s') % rev)
    if sidedata:
        sidedata = list(sidedata.items())
        sidedata.sort()
        ui.writenoi18n(b'%d sidedata entries\n' % len(sidedata))
        for key, value in sidedata:
            ui.writenoi18n(b' entry-%04o size %d\n' % (key, len(value)))
            if ui.verbose:
                ui.writenoi18n(b'  %s\n' % stringutil.pprint(value))


@command(b'debugssl', [], b'[SOURCE]', optionalrepo=True)
def debugssl(ui, repo, source=None, **opts):
    """test a secure connection to a server

    This builds the certificate chain for the server on Windows, installing
    the missing intermediates and trusted root via Windows Update if
    necessary. It does nothing on other platforms.

    If SOURCE is omitted, the 'default' path will be used. If a URL is given,
    that server is used. See :hg:`help urls` for more information.

    If the update succeeds, retry the original operation. Otherwise, the
    cause of the SSL error is likely another issue.
    """
    if not pycompat.iswindows:
        raise error.Abort(
            _(b'certificate chain building is only possible on Windows')
        )

    if not source:
        if not repo:
            raise error.Abort(
                _(
                    b"there is no Mercurial repository here, and no "
                    b"server specified"
                )
            )
        source = b"default"

    source, branches = hg.parseurl(ui.expandpath(source))
    url = util.url(source)

    defaultport = {b'https': 443, b'ssh': 22}
    if url.scheme in defaultport:
        try:
            addr = (url.host, int(url.port or defaultport[url.scheme]))
        except ValueError:
            raise error.Abort(_(b"malformed port number in URL"))
    else:
        raise error.Abort(_(b"only https and ssh connections are supported"))

    from . import win32

    s = ssl.wrap_socket(
        socket.socket(),
        ssl_version=ssl.PROTOCOL_TLS,
        cert_reqs=ssl.CERT_NONE,
        ca_certs=None,
    )

    try:
        s.connect(addr)
        cert = s.getpeercert(True)

        ui.status(_(b'checking the certificate chain for %s\n') % url.host)

        complete = win32.checkcertificatechain(cert, build=False)

        if not complete:
            ui.status(_(b'certificate chain is incomplete, updating... '))

            if not win32.checkcertificatechain(cert):
                ui.status(_(b'failed.\n'))
            else:
                ui.status(_(b'done.\n'))
        else:
            ui.status(_(b'full certificate chain is available\n'))
    finally:
        s.close()


@command(
    b"debugbackupbundle",
    [
        (
            b"",
            b"recover",
            b"",
            b"brings the specified changeset back into the repository",
        )
    ]
    + cmdutil.logopts,
    _(b"hg debugbackupbundle [--recover HASH]"),
)
def debugbackupbundle(ui, repo, *pats, **opts):
    """lists the changesets available in backup bundles

    Without any arguments, this command prints a list of the changesets in
    each backup bundle.

    --recover takes a changeset hash and unbundles the first bundle that
    contains that hash, which puts that changeset back in your repository.

    --verbose will print the entire commit message and the bundle path for
    that backup.
    """
    backups = list(
        filter(
            os.path.isfile, glob.glob(repo.vfs.join(b"strip-backup") + b"/*.hg")
        )
    )
    backups.sort(key=lambda x: os.path.getmtime(x), reverse=True)

    opts = pycompat.byteskwargs(opts)
    opts[b"bundle"] = b""
    opts[b"force"] = None
    limit = logcmdutil.getlimit(opts)

    def display(other, chlist, displayer):
        if opts.get(b"newest_first"):
            chlist.reverse()
        count = 0
        for n in chlist:
            if limit is not None and count >= limit:
                break
            parents = [True for p in other.changelog.parents(n) if p != nullid]
            if opts.get(b"no_merges") and len(parents) == 2:
                continue
            count += 1
            displayer.show(other[n])

    recovernode = opts.get(b"recover")
    if recovernode:
        if scmutil.isrevsymbol(repo, recovernode):
            ui.warn(_(b"%s already exists in the repo\n") % recovernode)
            return
    elif backups:
        msg = _(
            b"Recover changesets using: hg debugbackupbundle --recover "
            b"<changeset hash>\n\nAvailable backup changesets:"
        )
        ui.status(msg, label=b"status.removed")
    else:
        ui.status(_(b"no backup changesets found\n"))
        return

    for backup in backups:
        # Much of this is copied from the hg incoming logic
        source = ui.expandpath(os.path.relpath(backup, encoding.getcwd()))
        source, branches = hg.parseurl(source, opts.get(b"branch"))
        try:
            other = hg.peer(repo, opts, source)
        except error.LookupError as ex:
            msg = _(b"\nwarning: unable to open bundle %s") % source
            hint = _(b"\n(missing parent rev %s)\n") % short(ex.name)
            ui.warn(msg, hint=hint)
            continue
        revs, checkout = hg.addbranchrevs(
            repo, other, branches, opts.get(b"rev")
        )

        if revs:
            revs = [other.lookup(rev) for rev in revs]

        quiet = ui.quiet
        try:
            ui.quiet = True
            other, chlist, cleanupfn = bundlerepo.getremotechanges(
                ui, repo, other, revs, opts[b"bundle"], opts[b"force"]
            )
        except error.LookupError:
            continue
        finally:
            ui.quiet = quiet

        try:
            if not chlist:
                continue
            if recovernode:
                with repo.lock(), repo.transaction(b"unbundle") as tr:
                    if scmutil.isrevsymbol(other, recovernode):
                        ui.status(_(b"Unbundling %s\n") % (recovernode))
                        f = hg.openpath(ui, source)
                        gen = exchange.readbundle(ui, f, source)
                        if isinstance(gen, bundle2.unbundle20):
                            bundle2.applybundle(
                                repo,
                                gen,
                                tr,
                                source=b"unbundle",
                                url=b"bundle:" + source,
                            )
                        else:
                            gen.apply(repo, b"unbundle", b"bundle:" + source)
                        break
            else:
                backupdate = encoding.strtolocal(
                    time.strftime(
                        "%a %H:%M, %Y-%m-%d",
                        time.localtime(os.path.getmtime(source)),
                    )
                )
                ui.status(b"\n%s\n" % (backupdate.ljust(50)))
                if ui.verbose:
                    ui.status(b"%s%s\n" % (b"bundle:".ljust(13), source))
                else:
                    opts[
                        b"template"
                    ] = b"{label('status.modified', node|short)} {desc|firstline}\n"
                    displayer = logcmdutil.changesetdisplayer(
                        ui, other, opts, False
                    )
                    display(other, chlist, displayer)
                    displayer.close()
        finally:
            cleanupfn()


@command(
    b'debugsub',
    [(b'r', b'rev', b'', _(b'revision to check'), _(b'REV'))],
    _(b'[-r REV] [REV]'),
)
def debugsub(ui, repo, rev=None):
    ctx = scmutil.revsingle(repo, rev, None)
    for k, v in sorted(ctx.substate.items()):
        ui.writenoi18n(b'path %s\n' % k)
        ui.writenoi18n(b' source   %s\n' % v[0])
        ui.writenoi18n(b' revision %s\n' % v[1])


@command(b'debugshell', optionalrepo=True)
def debugshell(ui, repo):
    """run an interactive Python interpreter

    The local namespace is provided with a reference to the ui and
    the repo instance (if available).
    """
    import code

    imported_objects = {
        'ui': ui,
        'repo': repo,
    }

    code.interact(local=imported_objects)
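# Illustrative usage (not in the original source):
#   $ hg debugshell
#   >>> repo[b'tip'].description()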


@command(
    b'debugsuccessorssets',
    [(b'', b'closest', False, _(b'return closest successors sets only'))],
    _(b'[REV]'),
)
def debugsuccessorssets(ui, repo, *revs, **opts):
    """show set of successors for revision

    A successors set of changeset A is a consistent group of revisions that
    succeed A. It contains non-obsolete changesets only, unless the closest
    successors sets are requested.

    In most cases a changeset A has a single successors set containing a
    single successor (changeset A replaced by A').

    A changeset that is made obsolete with no successors is called "pruned".
    Such changesets have no successors sets at all.

    A changeset that has been "split" will have a successors set containing
    more than one successor.

    A changeset that has been rewritten in multiple different ways is called
    "divergent". Such changesets have multiple successor sets (each of which
    may also be split, i.e. have multiple successors).

    Results are displayed as follows::

        <rev1>
            <successors-1A>
        <rev2>
            <successors-2A>
            <successors-2B1> <successors-2B2> <successors-2B3>

    Here rev2 has two possible (i.e. divergent) successors sets. The first
    holds one element, whereas the second holds three (i.e. the changeset has
    been split).
    """
    # passed to successorssets caching computation from one call to another
    cache = {}
    ctx2str = bytes
    node2str = short
    for rev in scmutil.revrange(repo, revs):
        ctx = repo[rev]
        ui.write(b'%s\n' % ctx2str(ctx))
        for succsset in obsutil.successorssets(
            repo, ctx.node(), closest=opts['closest'], cache=cache
        ):
            if succsset:
                ui.write(b'    ')
                ui.write(node2str(succsset[0]))
                for node in succsset[1:]:
                    ui.write(b' ')
                    ui.write(node2str(node))
            ui.write(b'\n')


@command(b'debugtagscache', [])
def debugtagscache(ui, repo):
    """display the contents of .hg/cache/hgtagsfnodes1"""
    cache = tagsmod.hgtagsfnodescache(repo.unfiltered())
    for r in repo:
        node = repo[r].node()
        tagsnode = cache.getfnode(node, computemissing=False)
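        # Descriptive note (added; not in the original source): with
        # computemissing=False, the check below distinguishes a cached
        # entry that failed validation (False, shown as 'invalid') from
        # one that was never computed (None, shown as 'missing').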
        if tagsnode:
            tagsnodedisplay = hex(tagsnode)
        elif tagsnode is False:
            tagsnodedisplay = b'invalid'
        else:
            tagsnodedisplay = b'missing'

        ui.write(b'%d %s %s\n' % (r, hex(node), tagsnodedisplay))


@command(
    b'debugtemplate',
    [
        (b'r', b'rev', [], _(b'apply template on changesets'), _(b'REV')),
        (b'D', b'define', [], _(b'define template keyword'), _(b'KEY=VALUE')),
    ],
    _(b'[-r REV]... [-D KEY=VALUE]... TEMPLATE'),
    optionalrepo=True,
)
def debugtemplate(ui, repo, tmpl, **opts):
    """parse and apply a template

    If -r/--rev is given, the template is processed as a log template and
    applied to the given changesets. Otherwise, it is processed as a generic
    template.

    Use --verbose to print the parsed tree.
    """
    revs = None
    if opts['rev']:
        if repo is None:
            raise error.RepoError(
                _(b'there is no Mercurial repository here (.hg not found)')
            )
        revs = scmutil.revrange(repo, opts['rev'])

    props = {}
    for d in opts['define']:
        try:
            k, v = (e.strip() for e in d.split(b'=', 1))
            if not k or k == b'ui':
                raise ValueError
            props[k] = v
        except ValueError:
            raise error.Abort(_(b'malformed keyword definition: %s') % d)

    if ui.verbose:
        aliases = ui.configitems(b'templatealias')
        tree = templater.parse(tmpl)
        ui.note(templater.prettyformat(tree), b'\n')
        newtree = templater.expandaliases(tree, aliases)
        if newtree != tree:
            ui.notenoi18n(
                b"* expanded:\n", templater.prettyformat(newtree), b'\n'
            )

    if revs is None:
        tres = formatter.templateresources(ui, repo)
        t = formatter.maketemplater(ui, tmpl, resources=tres)
        if ui.verbose:
            kwds, funcs = t.symbolsuseddefault()
            ui.writenoi18n(b"* keywords: %s\n" % b', '.join(sorted(kwds)))
            ui.writenoi18n(b"* functions: %s\n" % b', '.join(sorted(funcs)))
        ui.write(t.renderdefault(props))
    else:
        displayer = logcmdutil.maketemplater(ui, repo, tmpl)
        if ui.verbose:
            kwds, funcs = displayer.t.symbolsuseddefault()
            ui.writenoi18n(b"* keywords: %s\n" % b', '.join(sorted(kwds)))
            ui.writenoi18n(b"* functions: %s\n" % b', '.join(sorted(funcs)))
        for r in revs:
            displayer.show(repo[r], **pycompat.strkwargs(props))
        displayer.close()


@command(
    b'debuguigetpass',
    [
        (b'p', b'prompt', b'', _(b'prompt text'), _(b'TEXT')),
    ],
    _(b'[-p TEXT]'),
    norepo=True,
)
def debuguigetpass(ui, prompt=b''):
    """show prompt to type password"""
    r = ui.getpass(prompt)
    if r is None:
        r = b"<default response>"
    ui.writenoi18n(b'response: %s\n' % r)


@command(
    b'debuguiprompt',
    [
        (b'p', b'prompt', b'', _(b'prompt text'), _(b'TEXT')),
    ],
    _(b'[-p TEXT]'),
    norepo=True,
)
def debuguiprompt(ui, prompt=b''):
    """show plain prompt"""
    r = ui.prompt(prompt)
    ui.writenoi18n(b'response: %s\n' % r)


@command(b'debugupdatecaches', [])
def debugupdatecaches(ui, repo, *pats, **opts):
    """warm all known caches in the repository"""
    with repo.wlock(), repo.lock():
        repo.updatecaches(full=True)


@command(
    b'debugupgraderepo',
    [
        (
            b'o',
            b'optimize',
            [],
            _(b'extra optimization to perform'),
            _(b'NAME'),
        ),
        (b'', b'run', False, _(b'performs an upgrade')),
        (b'', b'backup', True, _(b'keep the old repository content around')),
        (b'', b'changelog', None, _(b'select the changelog for upgrade')),
        (b'', b'manifest', None, _(b'select the manifest for upgrade')),
        (b'', b'filelogs', None, _(b'select all filelogs for upgrade')),
    ],
)
def debugupgraderepo(ui, repo, run=False, optimize=None, backup=True, **opts):
    """upgrade a repository to use different features

    If no arguments are specified, the repository is evaluated for upgrade
    and a list of problems and potential optimizations is printed.

    With ``--run``, a repository upgrade is performed. Behavior of the upgrade
    can be influenced via additional arguments. More details will be provided
    by the command output when run without ``--run``.

    During the upgrade, the repository will be locked and no writes will be
    allowed.

    At the end of the upgrade, the repository may not be readable while new
    repository data is swapped in. This window will be as long as it takes to
    rename some directories inside the ``.hg`` directory. On most machines,
    this should complete almost instantaneously and the chances of a consumer
    being unable to access the repository should be low.

    By default, all revlogs will be upgraded. You can restrict this using
    flags such as `--manifest`:

    * `--manifest`: only optimize the manifest
    * `--no-manifest`: optimize all revlogs but the manifest
    * `--changelog`: optimize the changelog only
    * `--no-changelog --no-manifest`: optimize filelogs only
    * `--filelogs`: optimize the filelogs only
    * `--no-changelog --no-manifest --no-filelogs`: skip all revlog
      optimizations
    """
    return upgrade.upgraderepo(
        ui, repo, run=run, optimize=set(optimize), backup=backup, **opts
    )


@command(
    b'debugwalk', cmdutil.walkopts, _(b'[OPTION]... [FILE]...'), inferrepo=True
)
def debugwalk(ui, repo, *pats, **opts):
    """show how files match on given patterns"""
    opts = pycompat.byteskwargs(opts)
    m = scmutil.match(repo[None], pats, opts)
    if ui.verbose:
        ui.writenoi18n(b'* matcher:\n', stringutil.prettyrepr(m), b'\n')
    items = list(repo[None].walk(m))
    if not items:
        return
    f = lambda fn: fn
    if ui.configbool(b'ui', b'slash') and pycompat.ossep != b'/':
        f = lambda fn: util.normpath(fn)
    fmt = b'f %%-%ds %%-%ds %%s' % (
        max([len(abs) for abs in items]),
        max([len(repo.pathto(abs)) for abs in items]),
    )
    for abs in items:
        line = fmt % (
            abs,
            f(repo.pathto(abs)),
            m.exact(abs) and b'exact' or b'',
        )
        ui.write(b"%s\n" % line.rstrip())


@command(b'debugwhyunstable', [], _(b'REV'))
def debugwhyunstable(ui, repo, rev):
    """explain instabilities of a changeset"""
    for entry in obsutil.whyunstable(repo, scmutil.revsingle(repo, rev)):
        dnodes = b''
        if entry.get(b'divergentnodes'):
            dnodes = (
                b' '.join(
                    b'%s (%s)' % (ctx.hex(), ctx.phasestr())
                    for ctx in entry[b'divergentnodes']
                )
                + b' '
            )
        ui.write(
            b'%s: %s%s %s\n'
            % (entry[b'instability'], dnodes, entry[b'reason'], entry[b'node'])
        )


@command(
    b'debugwireargs',
    [
        (b'', b'three', b'', b'three'),
        (b'', b'four', b'', b'four'),
        (b'', b'five', b'', b'five'),
    ]
    + cmdutil.remoteopts,
    _(b'REPO [OPTIONS]... [ONE [TWO]]'),
    norepo=True,
)
def debugwireargs(ui, repopath, *vals, **opts):
    opts = pycompat.byteskwargs(opts)
    repo = hg.peer(ui, opts, repopath)
    for opt in cmdutil.remoteopts:
        del opts[opt[1]]
    args = {}
    for k, v in pycompat.iteritems(opts):
        if v:
            args[k] = v
    args = pycompat.strkwargs(args)
    # run twice to check that we don't mess up the stream for the next command
    res1 = repo.debugwireargs(*vals, **args)
    res2 = repo.debugwireargs(*vals, **args)
    ui.write(b"%s\n" % res1)
    if res1 != res2:
        ui.warn(b"%s\n" % res2)


def _parsewirelangblocks(fh):
    activeaction = None
    blocklines = []
    lastindent = 0

    for line in fh:
        line = line.rstrip()
        if not line:
            continue

        if line.startswith(b'#'):
            continue

        if not line.startswith(b' '):
            # New block. Flush previous one.
            if activeaction:
                yield activeaction, blocklines

            activeaction = line
            blocklines = []
            lastindent = 0
            continue

        # Else we start with an indent.

        if not activeaction:
            raise error.Abort(_(b'indented line outside of block'))

        indent = len(line) - len(line.lstrip())

        # If this line is indented more than the last line, concatenate it.
        if indent > lastindent and blocklines:
            blocklines[-1] += line.lstrip()
        else:
            blocklines.append(line)
            lastindent = indent

    # Flush last block.
    if activeaction:
        yield activeaction, blocklines


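# An illustrative script for _parsewirelangblocks() (made up, not taken
# from the test suite):
#
#     command listkeys
#         namespace bookmarks
#     flush
#
# would yield (b'command listkeys', [b'    namespace bookmarks']) followed
# by (b'flush', []), since indented lines become block payload lines and
# unindented lines start new blocks.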
@command(
    b'debugwireproto',
    [
        (b'', b'localssh', False, _(b'start an SSH server for this repo')),
        (b'', b'peer', b'', _(b'construct a specific version of the peer')),
        (
            b'',
            b'noreadstderr',
            False,
            _(b'do not read from stderr of the remote'),
        ),
        (
            b'',
            b'nologhandshake',
            False,
            _(b'do not log I/O related to the peer handshake'),
        ),
    ]
    + cmdutil.remoteopts,
    _(b'[PATH]'),
    optionalrepo=True,
)
def debugwireproto(ui, repo, path=None, **opts):
    """send wire protocol commands to a server

    This command can be used to issue wire protocol commands to remote
    peers and to debug the raw data being exchanged.

    ``--localssh`` will start an SSH server against the current repository
    and connect to that. By default, the connection will perform a handshake
    and establish an appropriate peer instance.

    ``--peer`` can be used to bypass the handshake protocol and construct a
    peer instance using the specified class type. Valid values are ``raw``,
    ``http2``, ``ssh1``, and ``ssh2``. ``raw`` instances only allow sending
    raw data payloads and don't support higher-level command actions.

    ``--noreadstderr`` can be used to disable automatic reading from stderr
    of the peer (for SSH connections only). Disabling automatic reading of
    stderr is useful for making output more deterministic.

    Commands are issued via a mini language which is specified via stdin.
    The language consists of individual actions to perform. An action is
    defined by a block. A block is defined as a line with no leading
    space followed by 0 or more lines with leading space. Blocks are
    effectively a high-level command with additional metadata.

    Lines beginning with ``#`` are ignored.

    The following sections denote available actions.

    raw
    ---

    Send raw data to the server.

    The block payload contains the raw data to send as one atomic send
    operation. The data may not actually be delivered in a single system
    call: it depends on the abilities of the transport being used.

    Each line in the block is de-indented and concatenated. Then, that
    value is evaluated as a Python b'' literal. This allows the use of
    backslash escaping, etc.
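
    For example, an illustrative ``raw`` block (not taken from the test
    suite) might look like::

        raw
            hello\n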

    raw+
    ----

    Behaves like ``raw`` except flushes output afterwards.

    command <X>
    -----------

    Send a request to run a named command, whose name follows the ``command``
    string.

    Arguments to the command are defined as lines in this block. The format of
    each line is ``<key> <value>``. e.g.::

        command listkeys
            namespace bookmarks

    If the value begins with ``eval:``, it will be interpreted as a Python
    literal expression. Otherwise values are interpreted as Python b'' literals.
    This allows sending complex types and encoding special byte sequences via
    backslash escaping.

    The following arguments have special meaning:

    ``PUSHFILE``
        When defined, the *push* mechanism of the peer will be used instead
        of the static request-response mechanism and the content of the
        file specified in the value of this argument will be sent as the
        command payload.

        This can be used to submit a local bundle file to the remote.

    batchbegin
    ----------

    Instruct the peer to begin a batched send.

    All ``command`` blocks are queued for execution until the next
    ``batchsubmit`` block.

    batchsubmit
    -----------

    Submit previously queued ``command`` blocks as a batch request.

    This action MUST be paired with a ``batchbegin`` action.
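
    For example, an illustrative batched request (not taken from the test
    suite) might look like::

        batchbegin
        command heads
        command listkeys
            namespace bookmarks
        batchsubmit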

    httprequest <method> <path>
    ---------------------------

    (HTTP peer only)

    Send an HTTP request to the peer.

    The HTTP request line follows the ``httprequest`` action. e.g. ``GET /foo``.

    Arguments of the form ``<key>: <value>`` are interpreted as HTTP request
    headers to add to the request. e.g. ``Accept: foo``.

    The following arguments are special:

    ``BODYFILE``
        The content of the file defined as the value to this argument will be
        transferred verbatim as the HTTP request body.

    ``frame <type> <flags> <payload>``
        Send a unified protocol frame as part of the request body.

        All frames will be collected and sent as the body to the HTTP
        request.

    close
    -----

    Close the connection to the server.

    flush
    -----

    Flush data written to the server.

    readavailable
    -------------

    Close the write end of the connection and read all available data from
    the server.

    If the connection to the server encompasses multiple pipes, we poll both
    pipes and read available data.

    readline
    --------

    Read a line of output from the server. If there are multiple output
    pipes, reads only the main pipe.

    ereadline
    ---------

    Like ``readline``, but read from the stderr pipe, if available.

    read <X>
    --------

    ``read()`` N bytes from the server's main output pipe.

    eread <X>
    ---------

    ``read()`` N bytes from the server's stderr pipe, if available.

    Specifying Unified Frame-Based Protocol Frames
    ----------------------------------------------

    It is possible to emit a *Unified Frame-Based Protocol* by using special
    syntax.

    A frame is composed as a type, flags, and payload. These can be parsed
    from a string of the form:

       <request-id> <stream-id> <stream-flags> <type> <flags> <payload>

    ``request-id`` and ``stream-id`` are integers defining the request and
    stream identifiers.

    ``type`` can be an integer value for the frame type or the string name
    of the type. The strings are defined in ``wireprotoframing.py``. e.g.
    ``command-name``.

    ``stream-flags`` and ``flags`` are a ``|`` delimited list of flag
    components. Each component (and there can be just one) can be an integer
    or a flag name for stream flags or frame flags, respectively. Values are
    resolved to integers and then bitwise OR'd together.

    ``payload`` represents the raw frame payload. If it begins with
    ``cbor:``, the following string is evaluated as Python code and the
    resulting object is fed into a CBOR encoder. Otherwise it is interpreted
    as a Python byte string literal.
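
    For example, the following illustrative frame (not taken from the test
    suite) requests the ``heads`` command with a CBOR-encoded payload::

        frame 1 1 stream-begin command-request new cbor:{b'name': b'heads'}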
4348 """
4354 """
4349 opts = pycompat.byteskwargs(opts)
4355 opts = pycompat.byteskwargs(opts)
4350
4356
4351 if opts[b'localssh'] and not repo:
4357 if opts[b'localssh'] and not repo:
4352 raise error.Abort(_(b'--localssh requires a repository'))
4358 raise error.Abort(_(b'--localssh requires a repository'))
4353
4359
4354 if opts[b'peer'] and opts[b'peer'] not in (
4360 if opts[b'peer'] and opts[b'peer'] not in (
4355 b'raw',
4361 b'raw',
4356 b'http2',
4362 b'http2',
4357 b'ssh1',
4363 b'ssh1',
4358 b'ssh2',
4364 b'ssh2',
4359 ):
4365 ):
4360 raise error.Abort(
4366 raise error.Abort(
4361 _(b'invalid value for --peer'),
4367 _(b'invalid value for --peer'),
4362 hint=_(b'valid values are "raw", "ssh1", and "ssh2"'),
4368 hint=_(b'valid values are "raw", "ssh1", and "ssh2"'),
4363 )
4369 )

    if path and opts[b'localssh']:
        raise error.Abort(_(b'cannot specify --localssh with an explicit path'))

    if ui.interactive():
        ui.write(_(b'(waiting for commands on stdin)\n'))

    blocks = list(_parsewirelangblocks(ui.fin))

    proc = None
    stdin = None
    stdout = None
    stderr = None
    opener = None

    if opts[b'localssh']:
        # We start the SSH server in its own process so there is process
        # separation. This prevents a whole class of potential bugs around
        # shared state from interfering with server operation.
        args = procutil.hgcmd() + [
            b'-R',
            repo.root,
            b'debugserve',
            b'--sshstdio',
        ]
        proc = subprocess.Popen(
            pycompat.rapply(procutil.tonativestr, args),
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            bufsize=0,
        )

        stdin = proc.stdin
        stdout = proc.stdout
        stderr = proc.stderr

        # We turn the pipes into observers so we can log I/O.
        if ui.verbose or opts[b'peer'] == b'raw':
            stdin = util.makeloggingfileobject(
                ui, proc.stdin, b'i', logdata=True
            )
            stdout = util.makeloggingfileobject(
                ui, proc.stdout, b'o', logdata=True
            )
            stderr = util.makeloggingfileobject(
                ui, proc.stderr, b'e', logdata=True
            )

        # --localssh also implies the peer connection settings.

        url = b'ssh://localserver'
        autoreadstderr = not opts[b'noreadstderr']

        if opts[b'peer'] == b'ssh1':
            ui.write(_(b'creating ssh peer for wire protocol version 1\n'))
            peer = sshpeer.sshv1peer(
                ui,
                url,
                proc,
                stdin,
                stdout,
                stderr,
                None,
                autoreadstderr=autoreadstderr,
            )
        elif opts[b'peer'] == b'ssh2':
            ui.write(_(b'creating ssh peer for wire protocol version 2\n'))
            peer = sshpeer.sshv2peer(
                ui,
                url,
                proc,
                stdin,
                stdout,
                stderr,
                None,
                autoreadstderr=autoreadstderr,
            )
        elif opts[b'peer'] == b'raw':
            ui.write(_(b'using raw connection to peer\n'))
            peer = None
        else:
            ui.write(_(b'creating ssh peer from handshake results\n'))
            peer = sshpeer.makepeer(
                ui,
                url,
                proc,
                stdin,
                stdout,
                stderr,
                autoreadstderr=autoreadstderr,
            )

    elif path:
        # We bypass hg.peer() so we can proxy the sockets.
        # TODO consider not doing this because we skip
        # ``hg.wirepeersetupfuncs`` and potentially other useful functionality.
        u = util.url(path)
        if u.scheme != b'http':
            raise error.Abort(_(b'only http:// paths are currently supported'))

        url, authinfo = u.authinfo()
        openerargs = {
            'useragent': b'Mercurial debugwireproto',
        }

        # Turn pipes/sockets into observers so we can log I/O.
        if ui.verbose:
            openerargs.update(
                {
                    'loggingfh': ui,
                    'loggingname': b's',
                    'loggingopts': {
                        'logdata': True,
                        'logdataapis': False,
                    },
                }
            )

        if ui.debugflag:
            openerargs['loggingopts']['logdataapis'] = True

        # Don't send default headers when in raw mode. This allows us to
        # bypass most of the behavior of our URL handling code so we can
        # have near complete control over what's sent on the wire.
        if opts[b'peer'] == b'raw':
            openerargs['sendaccept'] = False

        opener = urlmod.opener(ui, authinfo, **openerargs)

        if opts[b'peer'] == b'http2':
            ui.write(_(b'creating http peer for wire protocol version 2\n'))
            # We go through makepeer() because we need an API descriptor for
            # the peer instance to be useful.
            with ui.configoverride(
                {(b'experimental', b'httppeer.advertise-v2'): True}
            ):
                if opts[b'nologhandshake']:
                    ui.pushbuffer()

                peer = httppeer.makepeer(ui, path, opener=opener)

                if opts[b'nologhandshake']:
                    ui.popbuffer()

            if not isinstance(peer, httppeer.httpv2peer):
                raise error.Abort(
                    _(
                        b'could not instantiate HTTP peer for '
                        b'wire protocol version 2'
                    ),
                    hint=_(
                        b'the server may not have the feature '
                        b'enabled or is not allowing this '
                        b'client version'
                    ),
                )

        elif opts[b'peer'] == b'raw':
            ui.write(_(b'using raw connection to peer\n'))
            peer = None
        elif opts[b'peer']:
            raise error.Abort(
                _(b'--peer %s not supported with HTTP peers') % opts[b'peer']
            )
        else:
            peer = httppeer.makepeer(ui, path, opener=opener)

        # We /could/ populate stdin/stdout with sock.makefile()...
    else:
        raise error.Abort(_(b'unsupported connection configuration'))

    batchedcommands = None

    # Now perform actions based on the parsed wire language instructions.
    for action, lines in blocks:
        if action in (b'raw', b'raw+'):
            if not stdin:
                raise error.Abort(_(b'cannot call raw/raw+ on this peer'))

            # Concatenate the data together.
            data = b''.join(l.lstrip() for l in lines)
            data = stringutil.unescapestr(data)
            stdin.write(data)

            if action == b'raw+':
                stdin.flush()
        elif action == b'flush':
            if not stdin:
                raise error.Abort(_(b'cannot call flush on this peer'))
            stdin.flush()
        elif action.startswith(b'command'):
            if not peer:
                raise error.Abort(
                    _(
                        b'cannot send commands unless peer instance '
                        b'is available'
                    )
                )

            command = action.split(b' ', 1)[1]

            args = {}
            for line in lines:
                # We need to allow empty values.
                fields = line.lstrip().split(b' ', 1)
                if len(fields) == 1:
                    key = fields[0]
                    value = b''
                else:
                    key, value = fields

                if value.startswith(b'eval:'):
                    value = stringutil.evalpythonliteral(value[5:])
                else:
                    value = stringutil.unescapestr(value)

                args[key] = value

            if batchedcommands is not None:
                batchedcommands.append((command, args))
                continue

            ui.status(_(b'sending %s command\n') % command)

            if b'PUSHFILE' in args:
                with open(args[b'PUSHFILE'], 'rb') as fh:
                    del args[b'PUSHFILE']
                    res, output = peer._callpush(
                        command, fh, **pycompat.strkwargs(args)
                    )
                ui.status(_(b'result: %s\n') % stringutil.escapestr(res))
                ui.status(
                    _(b'remote output: %s\n') % stringutil.escapestr(output)
                )
            else:
                with peer.commandexecutor() as e:
                    res = e.callcommand(command, args).result()

                if isinstance(res, wireprotov2peer.commandresponse):
                    val = res.objects()
                    ui.status(
                        _(b'response: %s\n')
                        % stringutil.pprint(val, bprefix=True, indent=2)
                    )
                else:
                    ui.status(
                        _(b'response: %s\n')
                        % stringutil.pprint(res, bprefix=True, indent=2)
                    )

        elif action == b'batchbegin':
            if batchedcommands is not None:
                raise error.Abort(_(b'nested batchbegin not allowed'))

            batchedcommands = []
        elif action == b'batchsubmit':
            # There is a batching API we could go through. But it would be
            # difficult to normalize requests into function calls. It is easier
            # to bypass this layer and normalize to commands + args.
            ui.status(
                _(b'sending batch with %d sub-commands\n')
                % len(batchedcommands)
            )
            assert peer is not None
            for i, chunk in enumerate(peer._submitbatch(batchedcommands)):
                ui.status(
                    _(b'response #%d: %s\n') % (i, stringutil.escapestr(chunk))
                )

            batchedcommands = None

        elif action.startswith(b'httprequest '):
            if not opener:
                raise error.Abort(
                    _(b'cannot use httprequest without an HTTP peer')
                )

            request = action.split(b' ', 2)
            if len(request) != 3:
                raise error.Abort(
                    _(
                        b'invalid httprequest: expected format is '
                        b'"httprequest <method> <path>"'
                    )
                )

            method, httppath = request[1:]
            headers = {}
            body = None
            frames = []
            for line in lines:
                line = line.lstrip()
                m = re.match(b'^([a-zA-Z0-9_-]+): (.*)$', line)
                if m:
                    # Headers need to use native strings.
                    key = pycompat.strurl(m.group(1))
                    value = pycompat.strurl(m.group(2))
                    headers[key] = value
                    continue

                if line.startswith(b'BODYFILE '):
                    # The filename is the second field of the split line.
                    with open(line.split(b' ', 1)[1], 'rb') as fh:
                        body = fh.read()
                elif line.startswith(b'frame '):
                    frame = wireprotoframing.makeframefromhumanstring(
                        line[len(b'frame ') :]
                    )

                    frames.append(frame)
                else:
                    raise error.Abort(
                        _(b'unknown argument to httprequest: %s') % line
                    )

            url = path + httppath

            if frames:
                body = b''.join(bytes(f) for f in frames)

            req = urlmod.urlreq.request(pycompat.strurl(url), body, headers)

            # urllib.Request insists on using has_data() as a proxy for
            # determining the request method. Override that to use our
            # explicitly requested method.
            req.get_method = lambda: pycompat.sysstr(method)

            try:
                res = opener.open(req)
                body = res.read()
            except util.urlerr.urlerror as e:
                # read() method must be called, but only exists in Python 2
                getattr(e, 'read', lambda: None)()
                continue

            ct = res.headers.get('Content-Type')
            if ct == 'application/mercurial-cbor':
                ui.write(
                    _(b'cbor> %s\n')
                    % stringutil.pprint(
                        cborutil.decodeall(body), bprefix=True, indent=2
                    )
                )

        elif action == b'close':
            assert peer is not None
            peer.close()
        elif action == b'readavailable':
            if not stdout or not stderr:
                raise error.Abort(
                    _(b'readavailable not available on this peer')
                )

            stdin.close()
            stdout.read()
            stderr.read()

        elif action == b'readline':
            if not stdout:
                raise error.Abort(_(b'readline not available on this peer'))
            stdout.readline()
        elif action == b'ereadline':
            if not stderr:
                raise error.Abort(_(b'ereadline not available on this peer'))
            stderr.readline()
        elif action.startswith(b'read '):
            count = int(action.split(b' ', 1)[1])
            if not stdout:
                raise error.Abort(_(b'read not available on this peer'))
            stdout.read(count)
        elif action.startswith(b'eread '):
            count = int(action.split(b' ', 1)[1])
            if not stderr:
                raise error.Abort(_(b'eread not available on this peer'))
            stderr.read(count)
        else:
            raise error.Abort(_(b'unknown action: %s') % action)

    if batchedcommands is not None:
        raise error.Abort(_(b'unclosed "batchbegin" request'))

    if peer:
        peer.close()

    if proc:
        proc.kill()
@@ -1,876 +1,880 @@
# tags.py - read tag info from local repository
#
# Copyright 2009 Matt Mackall <mpm@selenic.com>
# Copyright 2009 Greg Ward <greg@gerg.ca>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

# Currently this module only deals with reading and caching tags.
# Eventually, it could take care of updating (adding/removing/moving)
# tags too.

from __future__ import absolute_import

import errno
import io

from .node import (
    bin,
    hex,
    nullid,
    nullrev,
    short,
)
from .i18n import _
from . import (
    encoding,
    error,
    match as matchmod,
    pycompat,
    scmutil,
    util,
)
from .utils import stringutil

# Tags computation can be expensive and caches exist to make it fast in
# the common case.
#
# The "hgtagsfnodes1" cache file caches the .hgtags filenode values for
# each revision in the repository. The file is effectively an array of
# fixed length records. Read the docs for "hgtagsfnodescache" for technical
# details.
#
# The .hgtags filenode cache grows in proportion to the length of the
# changelog. The file is truncated when the changelog is stripped.
#
# The purpose of the filenode cache is to avoid the most expensive part
# of finding global tags, which is looking up the .hgtags filenode in the
# manifest for each head. This can take dozens of milliseconds, or over
# 100ms for repositories with very large manifests. Multiplied by dozens
# or even hundreds of heads, this is a significant performance concern.
#
# There also exists a separate cache file for each repository filter.
# These "tags-*" files store information about the history of tags.
#
# The tags cache files consist of a cache validation line followed by
# a history of tags.
#
# The cache validation line has the format:
#
#   <tiprev> <tipnode> [<filteredhash>]
#
# <tiprev> is an integer revision and <tipnode> is a 40 character hex
# node for that changeset. These redundantly identify the repository
# tip from the time the cache was written. In addition, <filteredhash>,
# if present, is a 40 character hex hash of the contents of the filtered
# revisions for this filter. If the set of filtered revs changes, the
# hash will change and invalidate the cache.
#
# The history part of the tags cache consists of lines of the form:
#
#   <node> <tag>
#
# (This format is identical to that of .hgtags files.)
#
# <tag> is the tag name and <node> is the 40 character hex changeset
# the tag is associated with.
#
# Tags are written sorted by tag name.
#
# Tags associated with multiple changesets have an entry for each changeset.
# The most recent changeset (in terms of revlog ordering for the head
# setting it) for each tag is last.
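#
# For example, a "tags-*" cache file for a filter might contain (node
# hashes shortened and invented for illustration):
#
#   4532 263bb1129e37 1b22184595f3
#   298a2a619eb4 1.0
#   d50a81c2bb04 1.1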


def fnoderevs(ui, repo, revs):
    """return the list of '.hgtags' fnodes used in a set of revisions

    This is returned as a list of unique fnodes. We use a list instead of a
    set because order matters when it comes to tags."""
    unfi = repo.unfiltered()
    tonode = unfi.changelog.node
    nodes = [tonode(r) for r in revs]
    fnodes = _getfnodes(ui, repo, nodes)
    fnodes = _filterfnodes(fnodes, nodes)
    return fnodes


def _nulltonone(value):
    """convert nullid to None

    For tag value, nullid means "deleted". This small utility function helps
    translating that to None."""
    if value == nullid:
        return None
    return value


def difftags(ui, repo, oldfnodes, newfnodes):
    """list differences between tags expressed in two sets of file-nodes

    The list contains entries in the form: (tagname, oldvalue, newvalue).
    None is used to express a missing value:
        ('foo', None, 'abcd') is a new tag,
        ('bar', 'ef01', None) is a deletion,
        ('baz', 'abcd', 'ef01') is a tag movement.
    """
    if oldfnodes == newfnodes:
        return []
    oldtags = _tagsfromfnodes(ui, repo, oldfnodes)
    newtags = _tagsfromfnodes(ui, repo, newfnodes)

    # list of (tag, old, new): None means missing
    entries = []
    for tag, (new, __) in newtags.items():
        new = _nulltonone(new)
        old, __ = oldtags.pop(tag, (None, None))
        old = _nulltonone(old)
        if old != new:
            entries.append((tag, old, new))
    # handle deleted tags
    for tag, (old, __) in oldtags.items():
        old = _nulltonone(old)
        if old is not None:
            entries.append((tag, old, None))
    entries.sort()
    return entries


140 def writediff(fp, difflist):
141     """write tags diff information to a file.
142
143     Data is stored in a line-based format:
144
145         <action> <hex-node> <tag-name>\n
146
147     Actions are defined as follows:
148         -R tag is removed,
149         +A tag is added,
150         -M tag is moved (old value),
151         +M tag is moved (new value),
152
153     Example:
154
155         +A 875517b4806a848f942811a315a5bce30804ae85 t5
156
157     See the documentation of the difftags output for details about the input.
158     """
159     add = b'+A %s %s\n'
160     remove = b'-R %s %s\n'
161     updateold = b'-M %s %s\n'
162     updatenew = b'+M %s %s\n'
163     for tag, old, new in difflist:
164         # translate to hex
165         if old is not None:
166             old = hex(old)
167         if new is not None:
168             new = hex(new)
169         # write to file
170         if old is None:
171             fp.write(add % (new, tag))
172         elif new is None:
173             fp.write(remove % (old, tag))
174         else:
175             fp.write(updateold % (old, tag))
176             fp.write(updatenew % (new, tag))
177
178
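As a usage sketch, a reader of this format (a hypothetical helper, not part
of this module) only needs to split each line back into its three fields;
the maxsplit of 2 keeps spaces inside tag names intact:

    def parse_diff_line(line):
        # b'+A <hex-node> <tag-name>' -> (action, node, tag)
        action, node, tag = line.rstrip(b'\n').split(b' ', 2)
        return action, node, tag

A -M/+M pair of lines therefore describes a single tag movement: the first
line carries the old node, the second the new one.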
179 def findglobaltags(ui, repo):
180     """Find global tags in a repo: return a tagsmap
181
182     tagsmap: tag name to (node, hist) 2-tuples.
183
184     The tags cache is read and updated as a side-effect of calling.
185     """
186     (heads, tagfnode, valid, cachetags, shouldwrite) = _readtagcache(ui, repo)
187     if cachetags is not None:
188         assert not shouldwrite
189         # XXX is this really 100% correct? are there oddball special
190         # cases where a global tag should outrank a local tag but won't,
191         # because cachetags does not contain rank info?
192         alltags = {}
193         _updatetags(cachetags, alltags)
194         return alltags
195
196     for head in reversed(heads):  # oldest to newest
197         assert repo.changelog.index.has_node(
198             head
199         ), b"tag cache returned bogus head %s" % short(head)
200     fnodes = _filterfnodes(tagfnode, reversed(heads))
201     alltags = _tagsfromfnodes(ui, repo, fnodes)
202
203     # and update the cache (if necessary)
204     if shouldwrite:
205         _writetagcache(ui, repo, valid, alltags)
206     return alltags
207
208
209 def _filterfnodes(tagfnode, nodes):
210     """return a list of unique fnodes
211
212     The order of this list matches the order of "nodes". Preserving this
213     order is important, as reading tags in a different order produces
214     different results."""
215     seen = set()  # set of fnode
216     fnodes = []
217     for no in nodes:  # oldest to newest
218         fnode = tagfnode.get(no)
219         if fnode and fnode not in seen:
220             seen.add(fnode)
221             fnodes.append(fnode)
222     return fnodes
223
224
225 def _tagsfromfnodes(ui, repo, fnodes):
226     """return a tagsmap from a list of file-nodes
227
228     tagsmap: tag name to (node, hist) 2-tuples.
229
230     The order of the list matters."""
231     alltags = {}
232     fctx = None
233     for fnode in fnodes:
234         if fctx is None:
235             fctx = repo.filectx(b'.hgtags', fileid=fnode)
236         else:
237             fctx = fctx.filectx(fnode)
238         filetags = _readtags(ui, repo, fctx.data().splitlines(), fctx)
239         _updatetags(filetags, alltags)
240     return alltags
241
242
243 def readlocaltags(ui, repo, alltags, tagtypes):
244     '''Read local tags in repo. Update alltags and tagtypes.'''
245     try:
246         data = repo.vfs.read(b"localtags")
247     except IOError as inst:
248         if inst.errno != errno.ENOENT:
249             raise
250         return
251
252     # localtags is in the local encoding; re-encode to UTF-8 on
253     # input for consistency with the rest of this module.
254     filetags = _readtags(
255         ui, repo, data.splitlines(), b"localtags", recode=encoding.fromlocal
256     )
257
258     # remove tags pointing to invalid nodes
259     cl = repo.changelog
260     for t in list(filetags):
261         try:
262             cl.rev(filetags[t][0])
263         except (LookupError, ValueError):
264             del filetags[t]
265
266     _updatetags(filetags, alltags, b'local', tagtypes)
267
268
269 def _readtaghist(ui, repo, lines, fn, recode=None, calcnodelines=False):
270     """Read tag definitions from a file (or any source of lines).
271
272     This function returns two sortdicts with similar information:
273
274     - the first dict, bintaghist, contains the tag information as expected by
275       the _readtags function, i.e. a mapping from tag name to (node, hist):
276         - node is the node id from the last line read for that name,
277         - hist is the list of node ids previously associated with it (in file
278           order). All node ids are binary, not hex.
279
280     - the second dict, hextaglines, is a mapping from tag name to a list of
281       [hexnode, line number] pairs, ordered from the oldest to the newest node.
282
283     When calcnodelines is False the hextaglines dict is not calculated (an
284     empty dict is returned). This is done to improve this function's
285     performance in cases where the line numbers are not needed.
286     """
287
288     bintaghist = util.sortdict()
289     hextaglines = util.sortdict()
290     count = 0
291
292     def dbg(msg):
293         ui.debug(b"%s, line %d: %s\n" % (fn, count, msg))
294
295     for nline, line in enumerate(lines):
296         count += 1
297         if not line:
298             continue
299         try:
300             (nodehex, name) = line.split(b" ", 1)
301         except ValueError:
302             dbg(b"cannot parse entry")
303             continue
304         name = name.strip()
305         if recode:
306             name = recode(name)
307         try:
308             nodebin = bin(nodehex)
309         except TypeError:
310             dbg(b"node '%s' is not well formed" % nodehex)
311             continue
312
313         # update filetags
314         if calcnodelines:
315             # map tag name to a list of line numbers
316             if name not in hextaglines:
317                 hextaglines[name] = []
318             hextaglines[name].append([nodehex, nline])
319             continue
320         # map tag name to (node, hist)
321         if name not in bintaghist:
322             bintaghist[name] = []
323         bintaghist[name].append(nodebin)
324     return bintaghist, hextaglines
325
326
327 def _readtags(ui, repo, lines, fn, recode=None, calcnodelines=False):
328     """Read tag definitions from a file (or any source of lines).
329
330     Returns a mapping from tag name to (node, hist).
331
332     "node" is the node id from the last line read for that name. "hist"
333     is the list of node ids previously associated with it (in file order).
334     All node ids are binary, not hex.
335     """
336     filetags, nodelines = _readtaghist(
337         ui, repo, lines, fn, recode=recode, calcnodelines=calcnodelines
338     )
339     # util.sortdict().__setitem__ is much slower at replacing than inserting
340     # new entries. The difference can matter if there are thousands of tags.
341     # Create a new sortdict to avoid the performance penalty.
342     newtags = util.sortdict()
343     for tag, taghist in filetags.items():
344         newtags[tag] = (taghist[-1], taghist[:-1])
345     return newtags
346
347
348 def _updatetags(filetags, alltags, tagtype=None, tagtypes=None):
349     """Incorporate the tag info read from one file into dictionaries
350
351     The first one, 'alltags', is a tagsmap (see 'findglobaltags' for details).
352
353     The second one, 'tagtypes', is optional and will be updated to track the
354     "tagtype" of entries in the tagsmap. When set, the 'tagtype' argument also
355     needs to be set."""
356     if tagtype is None:
357         assert tagtypes is None
358
359     for name, nodehist in pycompat.iteritems(filetags):
360         if name not in alltags:
361             alltags[name] = nodehist
362             if tagtype is not None:
363                 tagtypes[name] = tagtype
364             continue
365
366         # we prefer alltags[name] if:
367         #  it supersedes us OR
368         #  mutual supersedes and it has a higher rank
369         # otherwise we win because we're tip-most
370         anode, ahist = nodehist
371         bnode, bhist = alltags[name]
372         if (
373             bnode != anode
374             and anode in bhist
375             and (bnode not in ahist or len(bhist) > len(ahist))
376         ):
377             anode = bnode
378         elif tagtype is not None:
379             tagtypes[name] = tagtype
380         ahist.extend([n for n in bhist if n not in ahist])
381         alltags[name] = anode, ahist
382
383
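A worked example of the precedence rule above (node values shortened for
readability): suppose alltags already maps 'foo' to (b'B', [b'A']) and a
later file contributes (b'A', []) for the same tag. Since b'A' appears in
the existing history and b'B' does not appear in the newcomer's history,
the existing node b'B' supersedes the newcomer and wins; after the history
merge the entry is (b'B', [b'A']). If the nodes mutually superseded each
other, the entry with the longer history (the higher rank) would win
instead.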
384 def _filename(repo):
385     """name of a tagcache file for a given repo or repoview"""
386     filename = b'tags2'
387     if repo.filtername:
388         filename = b'%s-%s' % (filename, repo.filtername)
389     return filename
390
391
392 def _readtagcache(ui, repo):
393     """Read the tag cache.
394
395     Returns a tuple (heads, fnodes, validinfo, cachetags, shouldwrite).
396
397     If the cache is completely up-to-date, "cachetags" is a dict of the
398     form returned by _readtags() and "heads", "fnodes", and "validinfo" are
399     None and "shouldwrite" is False.
400
401     If the cache is not up to date, "cachetags" is None. "heads" is a list
402     of all heads currently in the repository, ordered from tip to oldest.
403     "validinfo" is a tuple describing cache validation info. This is used
404     when writing the tags cache. "fnodes" is a mapping from head to .hgtags
405     filenode. "shouldwrite" is True.
406
407     If the cache is not up to date, the caller is responsible for reading tag
408     info from each returned head. (See findglobaltags().)
409     """
410     try:
411         cachefile = repo.cachevfs(_filename(repo), b'r')
412         # force reading the file for static-http
413         cachelines = iter(cachefile)
414     except IOError:
415         cachefile = None
416
417     cacherev = None
418     cachenode = None
419     cachehash = None
420     if cachefile:
421         try:
422             validline = next(cachelines)
423             validline = validline.split()
424             cacherev = int(validline[0])
425             cachenode = bin(validline[1])
426             if len(validline) > 2:
427                 cachehash = bin(validline[2])
428         except Exception:
429             # corruption of the cache, just recompute it.
430             pass
431
432     tipnode = repo.changelog.tip()
433     tiprev = len(repo.changelog) - 1
434
435     # Case 1 (common): tip is the same, so nothing has changed.
436     # (Unchanged tip trivially means no changesets have been added.
437     # But, thanks to localrepository.destroyed(), it also means none
438     # have been destroyed by strip or rollback.)
439     if (
440         cacherev == tiprev
441         and cachenode == tipnode
442         and cachehash == scmutil.filteredhash(repo, tiprev)
443     ):
444         tags = _readtags(ui, repo, cachelines, cachefile.name)
445         cachefile.close()
446         return (None, None, None, tags, False)
447     if cachefile:
448         cachefile.close()  # ignore rest of file
449
450     valid = (tiprev, tipnode, scmutil.filteredhash(repo, tiprev))
451
452     repoheads = repo.heads()
453     # Case 2 (uncommon): empty repo; get out quickly and don't bother
454     # writing an empty cache.
455     if repoheads == [nullid]:
456         return ([], {}, valid, {}, False)
457
458     # Case 3 (uncommon): cache file missing or empty.
459
460     # Case 4 (uncommon): tip rev decreased. This should only happen
461     # when we're called from localrepository.destroyed(). Refresh the
462     # cache so future invocations will not see disappeared heads in the
463     # cache.
464
465     # Case 5 (common): tip has changed, so we've added/replaced heads.
466
467     # As it happens, the code to handle cases 3, 4, 5 is the same.
468
469     # N.B. in case 4 (nodes destroyed), "new head" really means "newly
470     # exposed".
471     if not len(repo.file(b'.hgtags')):
472         # No tags have ever been committed, so we can avoid a
473         # potentially expensive search.
474         return ([], {}, valid, None, True)
475
476     # Now we have to look up the .hgtags filenode for every new head.
477     # This is the most expensive part of finding tags, so performance
478     # depends primarily on the size of newheads. Worst case: no cache
479     # file, so newheads == repoheads.
480     # Reversed order helps the cache ('repoheads' is in descending order)
481     cachefnode = _getfnodes(ui, repo, reversed(repoheads))
482
483     # Caller has to iterate over all heads, but can use the filenodes in
484     # cachefnode to get to each .hgtags revision quickly.
485     return (repoheads, cachefnode, valid, None, True)
486
487
488 def _getfnodes(ui, repo, nodes):
489     """return .hgtags fnodes for a list of changeset nodes
490
491     Return value is a {node: fnode} mapping. There will be no entry for nodes
492     without a '.hgtags' file.
493     """
494     starttime = util.timer()
495     fnodescache = hgtagsfnodescache(repo.unfiltered())
496     cachefnode = {}
497     for node in nodes:
498         fnode = fnodescache.getfnode(node)
499         if fnode != nullid:
500             cachefnode[node] = fnode
501
502     fnodescache.write()
503
504     duration = util.timer() - starttime
505     ui.log(
506         b'tagscache',
507         b'%d/%d cache hits/lookups in %0.4f seconds\n',
508         fnodescache.hitcount,
509         fnodescache.lookupcount,
510         duration,
511     )
512     return cachefnode
513
514
515 def _writetagcache(ui, repo, valid, cachetags):
516     filename = _filename(repo)
517     try:
518         cachefile = repo.cachevfs(filename, b'w', atomictemp=True)
519     except (OSError, IOError):
520         return
521
522     ui.log(
523         b'tagscache',
524         b'writing .hg/cache/%s with %d tags\n',
525         filename,
526         len(cachetags),
527     )
528
529     if valid[2]:
530         cachefile.write(
531             b'%d %s %s\n' % (valid[0], hex(valid[1]), hex(valid[2]))
532         )
533     else:
534         cachefile.write(b'%d %s\n' % (valid[0], hex(valid[1])))
535
536     # Tag names in the cache are in UTF-8 -- which is the whole reason
537     # we keep them in UTF-8 throughout this module. If we converted
538     # them to the local encoding on input, we would lose info writing them
539     # to the cache.
540     for (name, (node, hist)) in sorted(pycompat.iteritems(cachetags)):
541         for n in hist:
542             cachefile.write(b"%s %s\n" % (hex(n), name))
543         cachefile.write(b"%s %s\n" % (hex(node), name))
544
545     try:
546         cachefile.close()
547     except (OSError, IOError):
548         pass
549
550
551 def tag(repo, names, node, message, local, user, date, editor=False):
552     """tag a revision with one or more symbolic names.
553
554     names is a list of strings or, when adding a single tag, names may be a
555     string.
556
557     if local is True, the tags are stored in a per-repository file.
558     otherwise, they are stored in the .hgtags file, and a new
559     changeset is committed with the change.
560
561     keyword arguments:
562
563     local: whether to store tags in a non-version-controlled file
564     (default False)
565
566     message: commit message to use if committing
567
568     user: name of user to use if committing
569
570     date: date tuple to use if committing"""
571
572     if not local:
573         m = matchmod.exact([b'.hgtags'])
574         st = repo.status(match=m, unknown=True, ignored=True)
575         if any(
576             (
577                 st.modified,
578                 st.added,
579                 st.removed,
580                 st.deleted,
581                 st.unknown,
582                 st.ignored,
583             )
584         ):
585             raise error.Abort(
586                 _(b'working copy of .hgtags is changed'),
587                 hint=_(b'please commit .hgtags manually'),
588             )
589
590     with repo.wlock():
591         repo.tags()  # instantiate the cache
592         _tag(repo, names, node, message, local, user, date, editor=editor)
593
594
595 def _tag(
596     repo, names, node, message, local, user, date, extra=None, editor=False
597 ):
598     if isinstance(names, bytes):
599         names = (names,)
600
601     branches = repo.branchmap()
602     for name in names:
603         repo.hook(b'pretag', throw=True, node=hex(node), tag=name, local=local)
604         if name in branches:
605             repo.ui.warn(
606                 _(b"warning: tag %s conflicts with existing branch name\n")
607                 % name
608             )
609
610     def writetags(fp, names, munge, prevtags):
611         fp.seek(0, io.SEEK_END)
612         if prevtags and not prevtags.endswith(b'\n'):
613             fp.write(b'\n')
614         for name in names:
615             if munge:
616                 m = munge(name)
617             else:
618                 m = name
619
620             if repo._tagscache.tagtypes and name in repo._tagscache.tagtypes:
621                 old = repo.tags().get(name, nullid)
622                 fp.write(b'%s %s\n' % (hex(old), m))
623             fp.write(b'%s %s\n' % (hex(node), m))
624         fp.close()
625
626     prevtags = b''
627     if local:
628         try:
629             fp = repo.vfs(b'localtags', b'r+')
630         except IOError:
631             fp = repo.vfs(b'localtags', b'a')
632         else:
633             prevtags = fp.read()
634
635         # local tags are stored in the current charset
636         writetags(fp, names, None, prevtags)
637         for name in names:
638             repo.hook(b'tag', node=hex(node), tag=name, local=local)
639         return
640
641     try:
642         fp = repo.wvfs(b'.hgtags', b'rb+')
643     except IOError as e:
644         if e.errno != errno.ENOENT:
645             raise
646         fp = repo.wvfs(b'.hgtags', b'ab')
647     else:
648         prevtags = fp.read()
649
650     # committed tags are stored in UTF-8
651     writetags(fp, names, encoding.fromlocal, prevtags)
652
653     fp.close()
654
655     repo.invalidatecaches()
656
657     if b'.hgtags' not in repo.dirstate:
658         repo[None].add([b'.hgtags'])
659
660     m = matchmod.exact([b'.hgtags'])
661     tagnode = repo.commit(
662         message, user, date, extra=extra, match=m, editor=editor
663     )
664
665     for name in names:
666         repo.hook(b'tag', node=hex(node), tag=name, local=local)
667
668     return tagnode
669
670
671 _fnodescachefile = b'hgtagsfnodes1'
672 _fnodesrecsize = 4 + 20  # changeset fragment + filenode
673 _fnodesmissingrec = b'\xff' * 24
674
675
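For reference, each record in this cache is a fixed 24-byte unit. A minimal
sketch of how a single record would be decoded (the helper is illustrative
only; the real logic, including prefix validation against the changelog,
lives in hgtagsfnodescache.getfnode() below):

    def decode_record(raw, rev):
        # Each rev owns a 4-byte changeset-node prefix + 20-byte filenode.
        offset = rev * _fnodesrecsize
        record = bytes(raw[offset : offset + _fnodesrecsize])
        if record == _fnodesmissingrec:
            return None  # no entry was ever computed for this rev
        return record[0:4], record[4:]  # (node prefix, .hgtags filenode)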
676 class hgtagsfnodescache(object):
677     """Persistent cache mapping revisions to .hgtags filenodes.
678
679     The cache is an array of records. Each item in the array corresponds to
680     a changelog revision. Values in the array contain the first 4 bytes of
681     the node hash and the 20-byte .hgtags filenode for that revision.
682
683     The first 4 bytes are present as a form of verification. Repository
684     stripping and rewriting may change the node at a numeric revision in the
685     changelog. The changeset fragment serves as a verifier to detect
686     rewriting. This logic is shared with the rev branch cache (see
687     branchmap.py).
688
689     The instance holds in memory the full cache content but entries are
690     only parsed on read.
691
692     Instances behave like lists. ``c[i]`` works where i is a rev or
693     changeset node. Missing indexes are populated automatically on access.
694     """
695
696     def __init__(self, repo):
697         assert repo.filtername is None
698
699         self._repo = repo
700
701         # Only for reporting purposes.
702         self.lookupcount = 0
703         self.hitcount = 0
704
705         try:
706             data = repo.cachevfs.read(_fnodescachefile)
707         except (OSError, IOError):
708             data = b""
709         self._raw = bytearray(data)
710
711         # The end state of self._raw is an array that is of the exact length
712         # required to hold a record for every revision in the repository.
713         # We truncate or extend the array as necessary. self._dirtyoffset is
714         # defined to be the start offset at which we need to write the output
715         # file. This offset is also adjusted when new entries are calculated
716         # for array members.
717         cllen = len(repo.changelog)
718         wantedlen = cllen * _fnodesrecsize
719         rawlen = len(self._raw)
720
721         self._dirtyoffset = None
722
723         rawlentokeep = min(
724             wantedlen, (rawlen // _fnodesrecsize) * _fnodesrecsize
725         )
726         if rawlen > rawlentokeep:
727             # There's no easy way to truncate array instances. This seems
728             # slightly less evil than copying a potentially large array slice.
729             for i in range(rawlen - rawlentokeep):
730                 self._raw.pop()
731             rawlen = len(self._raw)
732             self._dirtyoffset = rawlen
733         if rawlen < wantedlen:
734             if self._dirtyoffset is None:
735                 self._dirtyoffset = rawlen
736 +           # TODO: zero fill entire record, because it's invalid not missing?
737             self._raw.extend(b'\xff' * (wantedlen - rawlen))
738
739     def getfnode(self, node, computemissing=True):
740         """Obtain the filenode of the .hgtags file at a specified revision.
741
742         If the value is in the cache, the entry will be validated and returned.
743         Otherwise, the filenode will be computed and returned unless
743 -       "computemissing" is False, in which case None will be returned without
744 +       "computemissing" is False. In that case, None will be returned if
745 +       the entry is missing, or False if the entry is invalid, without
746         any potentially expensive computation being performed.
747
748         If a .hgtags file does not exist at the specified revision, nullid is
749         returned.
750         """
751         if node == nullid:
752             return nullid
753
754         ctx = self._repo[node]
755         rev = ctx.rev()
756
757         self.lookupcount += 1
758
759         offset = rev * _fnodesrecsize
760         record = b'%s' % self._raw[offset : offset + _fnodesrecsize]
761         properprefix = node[0:4]
762
763         # Validate and return existing entry.
764         if record != _fnodesmissingrec and len(record) == _fnodesrecsize:
765             fileprefix = record[0:4]
766
767             if fileprefix == properprefix:
768                 self.hitcount += 1
769                 return record[4:]
770
771             # Fall through.
772
773         # If we get here, the entry is either missing or invalid.
774
775         if not computemissing:
776 +           if record != _fnodesmissingrec:
777 +               return False
778             return None
779
780         fnode = None
781         cl = self._repo.changelog
782         p1rev, p2rev = cl._uncheckedparentrevs(rev)
783         p1node = cl.node(p1rev)
784         p1fnode = self.getfnode(p1node, computemissing=False)
785         if p2rev != nullrev:
786             # There are some no-merge changesets where p1 is null and p2 is
787             # set. Processing them as merges is just slower, but still gives
788             # a good result.
789             p2node = cl.node(p1rev)
790             p2fnode = self.getfnode(p2node, computemissing=False)
791             if p1fnode != p2fnode:
792                 # we cannot rely on readfast because we don't know against what
793                 # parent the readfast delta is computed
794                 p1fnode = None
795 -       if p1fnode is not None:
795 +       if p1fnode:
796             mctx = ctx.manifestctx()
797             fnode = mctx.readfast().get(b'.hgtags')
798             if fnode is None:
799                 fnode = p1fnode
800         if fnode is None:
801             # Populate missing entry.
802             try:
803                 fnode = ctx.filenode(b'.hgtags')
804             except error.LookupError:
805                 # No .hgtags file on this revision.
806                 fnode = nullid
807
808         self._writeentry(offset, properprefix, fnode)
809         return fnode
810
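With the change above, a cache-only probe can now tell the two cold cases
apart. A sketch of how a caller might branch on the result (the caller is
hypothetical, shown for illustration only):

    result = fnodescache.getfnode(node, computemissing=False)
    if result is None:
        pass  # no entry was ever recorded for this rev
    elif result is False:
        pass  # an entry exists, but its node prefix no longer matches
    else:
        fnode = result  # validated 20-byte .hgtags filenode (or nullid)

This is also why the `if p1fnode is not None:` check above had to become
`if p1fnode:`: a False (invalid) probe result must not be treated as a
usable parent filenode.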
811     def setfnode(self, node, fnode):
812         """Set the .hgtags filenode for a given changeset."""
813         assert len(fnode) == 20
814         ctx = self._repo[node]
815
816         # Do a lookup first to avoid writing if nothing has changed.
817         if self.getfnode(ctx.node(), computemissing=False) == fnode:
818             return
819
820         self._writeentry(ctx.rev() * _fnodesrecsize, node[0:4], fnode)
821
822     def _writeentry(self, offset, prefix, fnode):
823         # Slices on array instances only accept other arrays.
824         entry = bytearray(prefix + fnode)
825         self._raw[offset : offset + _fnodesrecsize] = entry
826         # self._dirtyoffset could be None.
827         self._dirtyoffset = min(self._dirtyoffset or 0, offset or 0)
828
829     def write(self):
830         """Perform all necessary writes to the cache file.
831
832         This may no-op if no writes are needed or if a write lock could
833         not be obtained.
834         """
835         if self._dirtyoffset is None:
836             return
837
838         data = self._raw[self._dirtyoffset :]
839         if not data:
840             return
841
842         repo = self._repo
843
844         try:
845             lock = repo.lock(wait=False)
846         except error.LockError:
847             repo.ui.log(
848                 b'tagscache',
849                 b'not writing .hg/cache/%s because '
850                 b'lock cannot be acquired\n' % _fnodescachefile,
851             )
852             return
853
854         try:
855             f = repo.cachevfs.open(_fnodescachefile, b'ab')
856             try:
857                 # if the file has been truncated
858                 actualoffset = f.tell()
859                 if actualoffset < self._dirtyoffset:
860                     self._dirtyoffset = actualoffset
861                     data = self._raw[self._dirtyoffset :]
862                 f.seek(self._dirtyoffset)
863                 f.truncate()
864                 repo.ui.log(
865                     b'tagscache',
866                     b'writing %d bytes to cache/%s\n'
867                     % (len(data), _fnodescachefile),
868                 )
869                 f.write(data)
870                 self._dirtyoffset = None
871             finally:
872                 f.close()
873         except (IOError, OSError) as inst:
874             repo.ui.log(
875                 b'tagscache',
876                 b"couldn't write cache/%s: %s\n"
877                 % (_fnodescachefile, stringutil.forcebytestr(inst)),
878             )
879         finally:
880             lock.release()
@@ -1,864 +1,892 @@
1 setup
2
3   $ cat >> $HGRCPATH << EOF
4   > [extensions]
5   > blackbox=
6   > mock=$TESTDIR/mockblackbox.py
7   > [blackbox]
8   > track = command, commandfinish, tagscache
9   > EOF
10
11 Helper functions:
12
13   $ cacheexists() {
14   >   [ -f .hg/cache/tags2-visible ] && echo "tag cache exists" || echo "no tag cache"
15   > }
16
17   $ fnodescacheexists() {
18   >   [ -f .hg/cache/hgtagsfnodes1 ] && echo "fnodes cache exists" || echo "no fnodes cache"
19   > }
20
21   $ dumptags() {
22   >   rev=$1
23   >   echo "rev $rev: .hgtags:"
24   >   hg cat -r$rev .hgtags
25   > }
26
27 # XXX need to test that the tag cache works when we strip an old head
28 # and add a new one rooted off non-tip: i.e. node and rev of tip are the
29 # same, but stuff has changed behind tip.
30
31 Setup:
32
33   $ hg init t
34   $ cd t
35   $ cacheexists
36   no tag cache
37   $ fnodescacheexists
38   no fnodes cache
39   $ hg id
40   000000000000 tip
41   $ cacheexists
42   no tag cache
43   $ fnodescacheexists
44   no fnodes cache
45   $ echo a > a
46   $ hg add a
47   $ hg commit -m "test"
48   $ hg co
49   0 files updated, 0 files merged, 0 files removed, 0 files unresolved
50   $ hg identify
51   acb14030fe0a tip
52   $ hg identify -r 'wdir()'
53   acb14030fe0a tip
54   $ cacheexists
55   tag cache exists
56 No fnodes cache because .hgtags file doesn't exist
57 (this is an implementation detail)
58   $ fnodescacheexists
59   no fnodes cache
60
61 Try corrupting the cache
62
63   $ printf 'a b' > .hg/cache/tags2-visible
64   $ hg identify
65   acb14030fe0a tip
66   $ cacheexists
67   tag cache exists
68   $ fnodescacheexists
69   no fnodes cache
70   $ hg identify
71   acb14030fe0a tip
72
73 Create local tag with long name:
74
75   $ T=`hg identify --debug --id`
76   $ hg tag -l "This is a local tag with a really long name!"
77   $ hg tags
78   tip 0:acb14030fe0a
79   This is a local tag with a really long name! 0:acb14030fe0a
80   $ rm .hg/localtags
81
82 Create a tag behind hg's back:
83
84   $ echo "$T first" > .hgtags
85   $ cat .hgtags
86   acb14030fe0a21b60322c440ad2d20cf7685a376 first
87   $ hg add .hgtags
88   $ hg commit -m "add tags"
89   $ hg tags
90   tip 1:b9154636be93
91   first 0:acb14030fe0a
92   $ hg identify
93   b9154636be93 tip
94
95 We should have a fnodes cache now that we have a real tag
96 The cache should have an empty entry for rev 0 and a valid entry for rev 1.
97
98
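(A quick size check for the dump below: two revisions times the 24-byte
record size, a 4-byte node prefix plus a 20-byte filenode per revision,
gives the expected 48 bytes; rev 0 is still the all-0xff "missing" record.)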
99 $ fnodescacheexists
99 $ fnodescacheexists
100 fnodes cache exists
100 fnodes cache exists
101 $ f --size --hexdump .hg/cache/hgtagsfnodes1
101 $ f --size --hexdump .hg/cache/hgtagsfnodes1
102 .hg/cache/hgtagsfnodes1: size=48
102 .hg/cache/hgtagsfnodes1: size=48
103 0000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................|
103 0000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................|
104 0010: ff ff ff ff ff ff ff ff b9 15 46 36 26 b7 b4 a7 |..........F6&...|
104 0010: ff ff ff ff ff ff ff ff b9 15 46 36 26 b7 b4 a7 |..........F6&...|
105 0020: 73 e0 9e e3 c5 2f 51 0e 19 e0 5e 1f f9 66 d8 59 |s..../Q...^..f.Y|
105 0020: 73 e0 9e e3 c5 2f 51 0e 19 e0 5e 1f f9 66 d8 59 |s..../Q...^..f.Y|
106 $ hg debugtagscache
106 $ hg debugtagscache
107 0 acb14030fe0a21b60322c440ad2d20cf7685a376 missing/invalid
107 0 acb14030fe0a21b60322c440ad2d20cf7685a376 missing
108 1 b9154636be938d3d431e75a7c906504a079bfe07 26b7b4a773e09ee3c52f510e19e05e1ff966d859
108 1 b9154636be938d3d431e75a7c906504a079bfe07 26b7b4a773e09ee3c52f510e19e05e1ff966d859
109
109
110 Repeat with cold tag cache:
110 Repeat with cold tag cache:
111
111
112 $ rm -f .hg/cache/tags2-visible .hg/cache/hgtagsfnodes1
112 $ rm -f .hg/cache/tags2-visible .hg/cache/hgtagsfnodes1
113 $ hg identify
113 $ hg identify
114 b9154636be93 tip
114 b9154636be93 tip
115
115
116 $ fnodescacheexists
116 $ fnodescacheexists
117 fnodes cache exists
117 fnodes cache exists
118 $ f --size --hexdump .hg/cache/hgtagsfnodes1
118 $ f --size --hexdump .hg/cache/hgtagsfnodes1
119 .hg/cache/hgtagsfnodes1: size=48
119 .hg/cache/hgtagsfnodes1: size=48
120 0000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................|
120 0000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................|
121 0010: ff ff ff ff ff ff ff ff b9 15 46 36 26 b7 b4 a7 |..........F6&...|
121 0010: ff ff ff ff ff ff ff ff b9 15 46 36 26 b7 b4 a7 |..........F6&...|
122 0020: 73 e0 9e e3 c5 2f 51 0e 19 e0 5e 1f f9 66 d8 59 |s..../Q...^..f.Y|
122 0020: 73 e0 9e e3 c5 2f 51 0e 19 e0 5e 1f f9 66 d8 59 |s..../Q...^..f.Y|
123
123
124 And again, but now unable to write tag cache or lock file:
124 And again, but now unable to write tag cache or lock file:
125
125
126 #if unix-permissions no-fsmonitor
126 #if unix-permissions no-fsmonitor
127
127
128 $ rm -f .hg/cache/tags2-visible .hg/cache/hgtagsfnodes1
128 $ rm -f .hg/cache/tags2-visible .hg/cache/hgtagsfnodes1
129 $ chmod 555 .hg/cache
129 $ chmod 555 .hg/cache
130 $ hg identify
130 $ hg identify
131 b9154636be93 tip
131 b9154636be93 tip
132 $ chmod 755 .hg/cache
132 $ chmod 755 .hg/cache
133
133
134 (this block should be protected by no-fsmonitor, because "chmod 555 .hg"
134 (this block should be protected by no-fsmonitor, because "chmod 555 .hg"
135 makes watchman fail at accessing to files under .hg)
135 makes watchman fail at accessing to files under .hg)

$ chmod 555 .hg
$ hg identify
b9154636be93 tip
$ chmod 755 .hg
#endif

Tag cache debug info written to blackbox log

$ rm -f .hg/cache/tags2-visible .hg/cache/hgtagsfnodes1
$ hg identify
b9154636be93 tip
$ hg blackbox -l 6
1970/01/01 00:00:00 bob @b9154636be938d3d431e75a7c906504a079bfe07 (5000)> identify
1970/01/01 00:00:00 bob @b9154636be938d3d431e75a7c906504a079bfe07 (5000)> writing 48 bytes to cache/hgtagsfnodes1
1970/01/01 00:00:00 bob @b9154636be938d3d431e75a7c906504a079bfe07 (5000)> 0/2 cache hits/lookups in * seconds (glob)
1970/01/01 00:00:00 bob @b9154636be938d3d431e75a7c906504a079bfe07 (5000)> writing .hg/cache/tags2-visible with 1 tags
1970/01/01 00:00:00 bob @b9154636be938d3d431e75a7c906504a079bfe07 (5000)> identify exited 0 after * seconds (glob)
1970/01/01 00:00:00 bob @b9154636be938d3d431e75a7c906504a079bfe07 (5000)> blackbox -l 6

Failure to acquire lock results in no write

$ rm -f .hg/cache/tags2-visible .hg/cache/hgtagsfnodes1
$ echo 'foo:1' > .hg/store/lock
$ hg identify
b9154636be93 tip
$ hg blackbox -l 6
1970/01/01 00:00:00 bob @b9154636be938d3d431e75a7c906504a079bfe07 (5000)> identify
1970/01/01 00:00:00 bob @b9154636be938d3d431e75a7c906504a079bfe07 (5000)> not writing .hg/cache/hgtagsfnodes1 because lock cannot be acquired
1970/01/01 00:00:00 bob @b9154636be938d3d431e75a7c906504a079bfe07 (5000)> 0/2 cache hits/lookups in * seconds (glob)
1970/01/01 00:00:00 bob @b9154636be938d3d431e75a7c906504a079bfe07 (5000)> writing .hg/cache/tags2-visible with 1 tags
1970/01/01 00:00:00 bob @b9154636be938d3d431e75a7c906504a079bfe07 (5000)> identify exited 0 after * seconds (glob)
1970/01/01 00:00:00 bob @b9154636be938d3d431e75a7c906504a079bfe07 (5000)> blackbox -l 6

$ fnodescacheexists
no fnodes cache

$ rm .hg/store/lock

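(For illustration only, not part of the test: a minimal sketch of the
non-blocking "try the lock, on failure silently skip the write" pattern the
log above demonstrates. The helper name is hypothetical and this is not
Mercurial's actual locking code.)

  import errno, os

  def write_cache_if_unlocked(lockpath, cachepath, data):
      # Try to take the lock without blocking: O_EXCL makes os.open fail
      # if the lock file already exists, i.e. another process holds it.
      try:
          fd = os.open(lockpath, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
      except OSError as err:
          if err.errno == errno.EEXIST:
              return False  # lock held elsewhere: skip the write, as above
          raise
      try:
          with open(cachepath, 'wb') as f:
              f.write(data)
          return True
      finally:
          os.close(fd)
          os.unlink(lockpath)
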
$ rm -f .hg/cache/tags2-visible .hg/cache/hgtagsfnodes1
$ hg identify
b9154636be93 tip

Create a branch:

$ echo bb > a
$ hg status
M a
$ hg identify
b9154636be93+ tip
$ hg co first
0 files updated, 0 files merged, 1 files removed, 0 files unresolved
$ hg id
acb14030fe0a+ first
$ hg id -r 'wdir()'
acb14030fe0a+ first
$ hg -v id
acb14030fe0a+ first
$ hg status
M a
$ echo 1 > b
$ hg add b
$ hg commit -m "branch"
created new head

Creating a new commit shouldn't append to the .hgtags fnodes cache until
tags info is accessed

$ f --size --hexdump .hg/cache/hgtagsfnodes1
.hg/cache/hgtagsfnodes1: size=48
0000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................|
0010: ff ff ff ff ff ff ff ff b9 15 46 36 26 b7 b4 a7 |..........F6&...|
0020: 73 e0 9e e3 c5 2f 51 0e 19 e0 5e 1f f9 66 d8 59 |s..../Q...^..f.Y|

$ hg id
c8edf04160c7 tip

The first 4 bytes of record 3 are the changeset fragment

$ f --size --hexdump .hg/cache/hgtagsfnodes1
.hg/cache/hgtagsfnodes1: size=72
0000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................|
0010: ff ff ff ff ff ff ff ff b9 15 46 36 26 b7 b4 a7 |..........F6&...|
0020: 73 e0 9e e3 c5 2f 51 0e 19 e0 5e 1f f9 66 d8 59 |s..../Q...^..f.Y|
0030: c8 ed f0 41 00 00 00 00 00 00 00 00 00 00 00 00 |...A............|
0040: 00 00 00 00 00 00 00 00 |........|

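(For illustration only, not part of the test: the dumps above show one
24-byte record per revision, where the first 4 bytes are a fragment of the
changeset node and the remaining 20 bytes are that revision's .hgtags
filenode; records not yet populated are 0xff-filled. A minimal sketch of
reading the file; the function name is hypothetical.)

  def read_fnodes_records(path):
      # One 24-byte record per revision: a 4-byte changeset node prefix
      # followed by the 20-byte .hgtags filenode for that revision.
      with open(path, 'rb') as f:
          data = f.read()
      for rev in range(len(data) // 24):
          rec = data[rev * 24:(rev + 1) * 24]
          yield rev, rec[:4], rec[4:]  # rev, node prefix, fnode
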
Merge the two heads:

$ hg merge 1
1 files updated, 0 files merged, 0 files removed, 0 files unresolved
(branch merge, don't forget to commit)
$ hg blackbox -l3
1970/01/01 00:00:00 bob @c8edf04160c7f731e4589d66ab3ab3486a64ac28 (5000)> merge 1
1970/01/01 00:00:00 bob @c8edf04160c7f731e4589d66ab3ab3486a64ac28+b9154636be938d3d431e75a7c906504a079bfe07 (5000)> merge 1 exited 0 after * seconds (glob)
1970/01/01 00:00:00 bob @c8edf04160c7f731e4589d66ab3ab3486a64ac28+b9154636be938d3d431e75a7c906504a079bfe07 (5000)> blackbox -l3
$ hg id
c8edf04160c7+b9154636be93+ tip
$ hg status
M .hgtags
$ hg commit -m "merge"

Create a fake head, make sure the tag is not visible afterwards:

$ cp .hgtags tags
$ hg tag last
$ hg rm .hgtags
$ hg commit -m "remove"

$ mv tags .hgtags
$ hg add .hgtags
$ hg commit -m "readd"
$
$ hg tags
tip 6:35ff301afafe
first 0:acb14030fe0a

Add invalid tags:

$ echo "spam" >> .hgtags
$ echo >> .hgtags
$ echo "foo bar" >> .hgtags
$ echo "a5a5 invalid" >> .hg/localtags
$ cat .hgtags
acb14030fe0a21b60322c440ad2d20cf7685a376 first
spam

foo bar
$ hg commit -m "tags"

Report tag parse error on other head:

$ hg up 3
1 files updated, 0 files merged, 0 files removed, 0 files unresolved
$ echo 'x y' >> .hgtags
$ hg commit -m "head"
created new head

$ hg tags --debug
.hgtags@75d9f02dfe28, line 2: cannot parse entry
.hgtags@75d9f02dfe28, line 4: node 'foo' is not well formed
.hgtags@c4be69a18c11, line 2: node 'x' is not well formed
tip 8:c4be69a18c11e8bc3a5fdbb576017c25f7d84663
first 0:acb14030fe0a21b60322c440ad2d20cf7685a376
$ hg tip
changeset: 8:c4be69a18c11
tag: tip
parent: 3:ac5e980c4dc0
user: test
date: Thu Jan 01 00:00:00 1970 +0000
summary: head


Test tag precedence rules:

$ cd ..
$ hg init t2
$ cd t2
$ echo foo > foo
$ hg add foo
$ hg ci -m 'add foo' # rev 0
$ hg tag bar # rev 1
$ echo >> foo
$ hg ci -m 'change foo 1' # rev 2
$ hg up -C 1
1 files updated, 0 files merged, 0 files removed, 0 files unresolved
$ hg tag -r 1 -f bar # rev 3
$ hg up -C 1
1 files updated, 0 files merged, 0 files removed, 0 files unresolved
$ echo >> foo
$ hg ci -m 'change foo 2' # rev 4
created new head
$ hg tags
tip 4:0c192d7d5e6b
bar 1:78391a272241

Repeat in case of cache effects:

$ hg tags
tip 4:0c192d7d5e6b
bar 1:78391a272241

Detailed dump of tag info:

$ hg heads -q # expect 4, 3, 2
4:0c192d7d5e6b
3:6fa450212aeb
2:7a94127795a3
$ dumptags 2
rev 2: .hgtags:
bbd179dfa0a71671c253b3ae0aa1513b60d199fa bar
$ dumptags 3
rev 3: .hgtags:
bbd179dfa0a71671c253b3ae0aa1513b60d199fa bar
bbd179dfa0a71671c253b3ae0aa1513b60d199fa bar
78391a272241d70354aa14c874552cad6b51bb42 bar
$ dumptags 4
rev 4: .hgtags:
bbd179dfa0a71671c253b3ae0aa1513b60d199fa bar

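(For illustration only, not part of the test: within a single .hgtags file,
later lines override earlier ones for the same tag name, which is why rev 3
above resolves bar to 78391a272241; the earlier entries are kept as history
for resolving the same tag across heads. A minimal sketch of the per-file
rule, with a hypothetical function name.)

  def parse_hgtags(data):
      # Each line is '<40-hex node> <tag name>'; for a given tag the
      # last entry in the file wins.
      tags = {}
      for line in data.splitlines():
          if not line.strip():
              continue
          node, _, name = line.partition(' ')
          tags[name] = node
      return tags
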
Dump cache:

$ cat .hg/cache/tags2-visible
4 0c192d7d5e6b78a714de54a2e9627952a877e25a
bbd179dfa0a71671c253b3ae0aa1513b60d199fa bar
bbd179dfa0a71671c253b3ae0aa1513b60d199fa bar
78391a272241d70354aa14c874552cad6b51bb42 bar

$ f --size --hexdump .hg/cache/hgtagsfnodes1
.hg/cache/hgtagsfnodes1: size=120
0000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................|
0010: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................|
0020: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................|
0030: 7a 94 12 77 0c 04 f2 a8 af 31 de 17 fa b7 42 28 |z..w.....1....B(|
0040: 78 ee 5a 2d ad bc 94 3d 6f a4 50 21 7d 3b 71 8c |x.Z-...=o.P!};q.|
0050: 96 4e f3 7b 89 e5 50 eb da fd 57 89 e7 6c e1 b0 |.N.{..P...W..l..|
0060: 0c 19 2d 7d 0c 04 f2 a8 af 31 de 17 fa b7 42 28 |..-}.....1....B(|
0070: 78 ee 5a 2d ad bc 94 3d |x.Z-...=|

Corrupt the .hgtags fnodes cache
Extra junk data at the end should get overwritten on next cache update

$ echo extra >> .hg/cache/hgtagsfnodes1
$ echo dummy1 > foo
$ hg commit -m throwaway1

$ hg tags
tip 5:8dbfe60eff30
bar 1:78391a272241

$ hg blackbox -l 6
1970/01/01 00:00:00 bob @8dbfe60eff306a54259cfe007db9e330e7ecf866 (5000)> tags
1970/01/01 00:00:00 bob @8dbfe60eff306a54259cfe007db9e330e7ecf866 (5000)> writing 24 bytes to cache/hgtagsfnodes1
1970/01/01 00:00:00 bob @8dbfe60eff306a54259cfe007db9e330e7ecf866 (5000)> 3/4 cache hits/lookups in * seconds (glob)
1970/01/01 00:00:00 bob @8dbfe60eff306a54259cfe007db9e330e7ecf866 (5000)> writing .hg/cache/tags2-visible with 1 tags
1970/01/01 00:00:00 bob @8dbfe60eff306a54259cfe007db9e330e7ecf866 (5000)> tags exited 0 after * seconds (glob)
1970/01/01 00:00:00 bob @8dbfe60eff306a54259cfe007db9e330e7ecf866 (5000)> blackbox -l 6

When junk data is combined with missing cache entries, hg also overwrites the junk.

$ rm -f .hg/cache/tags2-visible
>>> import os
>>> with open(".hg/cache/hgtagsfnodes1", "ab+") as fp:
... fp.seek(-10, os.SEEK_END) and None
... fp.truncate() and None

$ hg debugtagscache | tail -2
4 0c192d7d5e6b78a714de54a2e9627952a877e25a 0c04f2a8af31de17fab7422878ee5a2dadbc943d
5 8dbfe60eff306a54259cfe007db9e330e7ecf866 missing
$ hg tags
tip 5:8dbfe60eff30
bar 1:78391a272241
$ hg debugtagscache | tail -2
4 0c192d7d5e6b78a714de54a2e9627952a877e25a 0c04f2a8af31de17fab7422878ee5a2dadbc943d
5 8dbfe60eff306a54259cfe007db9e330e7ecf866 0c04f2a8af31de17fab7422878ee5a2dadbc943d

If the 4 bytes of node hash for a record don't match an existing node, the entry
is flagged as invalid.

>>> import os
>>> with open(".hg/cache/hgtagsfnodes1", "rb+") as fp:
... fp.seek(-24, os.SEEK_END) and None
... fp.write(b'\xde\xad') and None

$ f --size --hexdump .hg/cache/hgtagsfnodes1
.hg/cache/hgtagsfnodes1: size=144
0000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................|
0010: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................|
0020: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................|
0030: 7a 94 12 77 0c 04 f2 a8 af 31 de 17 fa b7 42 28 |z..w.....1....B(|
0040: 78 ee 5a 2d ad bc 94 3d 6f a4 50 21 7d 3b 71 8c |x.Z-...=o.P!};q.|
0050: 96 4e f3 7b 89 e5 50 eb da fd 57 89 e7 6c e1 b0 |.N.{..P...W..l..|
0060: 0c 19 2d 7d 0c 04 f2 a8 af 31 de 17 fa b7 42 28 |..-}.....1....B(|
0070: 78 ee 5a 2d ad bc 94 3d de ad e6 0e 0c 04 f2 a8 |x.Z-...=........|
0080: af 31 de 17 fa b7 42 28 78 ee 5a 2d ad bc 94 3d |.1....B(x.Z-...=|

$ hg debugtagscache | tail -2
4 0c192d7d5e6b78a714de54a2e9627952a877e25a 0c04f2a8af31de17fab7422878ee5a2dadbc943d
5 8dbfe60eff306a54259cfe007db9e330e7ecf866 invalid

$ hg tags
tip 5:8dbfe60eff30
bar 1:78391a272241

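(For illustration only, not part of the test: the two debugtagscache runs
above show the distinction this change introduces. "missing" means the file
holds no usable record for the revision, while "invalid" means a record
exists but its 4-byte prefix doesn't match the revision's changeset node. A
minimal sketch under those assumptions, with hypothetical names.)

  RECORD_SIZE = 24

  def classify_entry(cachedata, rev, changesetnode):
      start = rev * RECORD_SIZE
      record = cachedata[start:start + RECORD_SIZE]
      # 'missing': no complete record stored, e.g. after the truncation
      # above, or a record that was never populated (all 0xff).
      if len(record) < RECORD_SIZE or record == b'\xff' * RECORD_SIZE:
          return 'missing'
      # 'invalid': a record exists but its leading 4 bytes don't match
      # the changeset node, e.g. after the \xde\xad overwrite above.
      if record[:4] != changesetnode[:4]:
          return 'invalid'
      return record[4:]  # the cached .hgtags filenode
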
#if unix-permissions no-root
Errors writing to .hgtags fnodes cache are silently ignored

$ echo dummy2 > foo
$ hg commit -m throwaway2

$ chmod a-w .hg/cache/hgtagsfnodes1
$ rm -f .hg/cache/tags2-visible

$ hg tags
tip 6:b968051b5cf3
bar 1:78391a272241

$ hg blackbox -l 6
1970/01/01 00:00:00 bob @b968051b5cf3f624b771779c6d5f84f1d4c3fb5d (5000)> tags
1970/01/01 00:00:00 bob @b968051b5cf3f624b771779c6d5f84f1d4c3fb5d (5000)> couldn't write cache/hgtagsfnodes1: [Errno *] * (glob)
1970/01/01 00:00:00 bob @b968051b5cf3f624b771779c6d5f84f1d4c3fb5d (5000)> 2/4 cache hits/lookups in * seconds (glob)
1970/01/01 00:00:00 bob @b968051b5cf3f624b771779c6d5f84f1d4c3fb5d (5000)> writing .hg/cache/tags2-visible with 1 tags
1970/01/01 00:00:00 bob @b968051b5cf3f624b771779c6d5f84f1d4c3fb5d (5000)> tags exited 0 after * seconds (glob)
1970/01/01 00:00:00 bob @b968051b5cf3f624b771779c6d5f84f1d4c3fb5d (5000)> blackbox -l 6

$ chmod a+w .hg/cache/hgtagsfnodes1

$ rm -f .hg/cache/tags2-visible
$ hg tags
tip 6:b968051b5cf3
bar 1:78391a272241

$ hg blackbox -l 6
1970/01/01 00:00:00 bob @b968051b5cf3f624b771779c6d5f84f1d4c3fb5d (5000)> tags
1970/01/01 00:00:00 bob @b968051b5cf3f624b771779c6d5f84f1d4c3fb5d (5000)> writing 24 bytes to cache/hgtagsfnodes1
1970/01/01 00:00:00 bob @b968051b5cf3f624b771779c6d5f84f1d4c3fb5d (5000)> 2/4 cache hits/lookups in * seconds (glob)
1970/01/01 00:00:00 bob @b968051b5cf3f624b771779c6d5f84f1d4c3fb5d (5000)> writing .hg/cache/tags2-visible with 1 tags
1970/01/01 00:00:00 bob @b968051b5cf3f624b771779c6d5f84f1d4c3fb5d (5000)> tags exited 0 after * seconds (glob)
1970/01/01 00:00:00 bob @b968051b5cf3f624b771779c6d5f84f1d4c3fb5d (5000)> blackbox -l 6

$ f --size .hg/cache/hgtagsfnodes1
.hg/cache/hgtagsfnodes1: size=168

$ hg -q --config extensions.strip= strip -r 6 --no-backup
#endif

Stripping doesn't truncate the tags cache until new data is available

$ rm -f .hg/cache/hgtagsfnodes1 .hg/cache/tags2-visible
$ hg tags
tip 5:8dbfe60eff30
bar 1:78391a272241

$ f --size .hg/cache/hgtagsfnodes1
.hg/cache/hgtagsfnodes1: size=144

$ hg -q --config extensions.strip= strip -r 5 --no-backup
$ hg tags
tip 4:0c192d7d5e6b
bar 1:78391a272241

$ hg blackbox -l 5
1970/01/01 00:00:00 bob @0c192d7d5e6b78a714de54a2e9627952a877e25a (5000)> writing 24 bytes to cache/hgtagsfnodes1
1970/01/01 00:00:00 bob @0c192d7d5e6b78a714de54a2e9627952a877e25a (5000)> 2/4 cache hits/lookups in * seconds (glob)
1970/01/01 00:00:00 bob @0c192d7d5e6b78a714de54a2e9627952a877e25a (5000)> writing .hg/cache/tags2-visible with 1 tags
1970/01/01 00:00:00 bob @0c192d7d5e6b78a714de54a2e9627952a877e25a (5000)> tags exited 0 after * seconds (glob)
1970/01/01 00:00:00 bob @0c192d7d5e6b78a714de54a2e9627952a877e25a (5000)> blackbox -l 5

$ f --size .hg/cache/hgtagsfnodes1
.hg/cache/hgtagsfnodes1: size=120

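(For illustration only, not part of the test: with 24-byte records, a file
covering revisions 0..tiprev should be (tiprev + 1) * 24 bytes. After the
strip above the file stays at 144 bytes with tip at rev 4; the trailing
record is simply ignored until the next cache write truncates the file back
to 120 bytes. A minimal sketch of that size check; the names are
hypothetical.)

  import os

  RECORD_SIZE = 24

  def expected_size(tiprev):
      # one 24-byte record per revision 0..tiprev
      return (tiprev + 1) * RECORD_SIZE

  def has_stale_tail(path, tiprev):
      # True when the file keeps records past the current tip, as seen
      # right after a strip and before the next cache update.
      return os.path.getsize(path) > expected_size(tiprev)
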
$ echo dummy > foo
$ hg commit -m throwaway3

$ hg tags
tip 5:035f65efb448
bar 1:78391a272241

$ hg blackbox -l 6
1970/01/01 00:00:00 bob @035f65efb448350f4772141702a81ab1df48c465 (5000)> tags
1970/01/01 00:00:00 bob @035f65efb448350f4772141702a81ab1df48c465 (5000)> writing 24 bytes to cache/hgtagsfnodes1
1970/01/01 00:00:00 bob @035f65efb448350f4772141702a81ab1df48c465 (5000)> 3/4 cache hits/lookups in * seconds (glob)
1970/01/01 00:00:00 bob @035f65efb448350f4772141702a81ab1df48c465 (5000)> writing .hg/cache/tags2-visible with 1 tags
1970/01/01 00:00:00 bob @035f65efb448350f4772141702a81ab1df48c465 (5000)> tags exited 0 after * seconds (glob)
1970/01/01 00:00:00 bob @035f65efb448350f4772141702a81ab1df48c465 (5000)> blackbox -l 6
$ f --size .hg/cache/hgtagsfnodes1
.hg/cache/hgtagsfnodes1: size=144

$ hg -q --config extensions.strip= strip -r 5 --no-backup

Test tag removal:

$ hg tag --remove bar # rev 5
$ hg tip -vp
changeset: 5:5f6e8655b1c7
tag: tip
user: test
date: Thu Jan 01 00:00:00 1970 +0000
files: .hgtags
description:
Removed tag bar


diff -r 0c192d7d5e6b -r 5f6e8655b1c7 .hgtags
--- a/.hgtags Thu Jan 01 00:00:00 1970 +0000
+++ b/.hgtags Thu Jan 01 00:00:00 1970 +0000
@@ -1,1 +1,3 @@
bbd179dfa0a71671c253b3ae0aa1513b60d199fa bar
+78391a272241d70354aa14c874552cad6b51bb42 bar
+0000000000000000000000000000000000000000 bar

$ hg tags
tip 5:5f6e8655b1c7
$ hg tags # again, try to expose cache bugs
tip 5:5f6e8655b1c7

Remove nonexistent tag:

$ hg tag --remove foobar
abort: tag 'foobar' does not exist
[10]
$ hg tip
changeset: 5:5f6e8655b1c7
tag: tip
user: test
date: Thu Jan 01 00:00:00 1970 +0000
summary: Removed tag bar


Undo a tag with rollback:

$ hg rollback # destroy rev 5 (restore bar)
repository tip rolled back to revision 4 (undo commit)
working directory now based on revision 4
$ hg tags
tip 4:0c192d7d5e6b
bar 1:78391a272241
$ hg tags
tip 4:0c192d7d5e6b
bar 1:78391a272241

Test tag rank:

$ cd ..
$ hg init t3
$ cd t3
$ echo foo > foo
$ hg add foo
$ hg ci -m 'add foo' # rev 0
$ hg tag -f bar # rev 1 bar -> 0
$ hg tag -f bar # rev 2 bar -> 1
$ hg tag -fr 0 bar # rev 3 bar -> 0
$ hg tag -fr 1 bar # rev 4 bar -> 1
$ hg tag -fr 0 bar # rev 5 bar -> 0
$ hg tags
tip 5:85f05169d91d
bar 0:bbd179dfa0a7
$ hg co 3
1 files updated, 0 files merged, 0 files removed, 0 files unresolved
$ echo barbar > foo
$ hg ci -m 'change foo' # rev 6
created new head
$ hg tags
tip 6:735c3ca72986
bar 0:bbd179dfa0a7

Don't allow moving tag without -f:

$ hg tag -r 3 bar
abort: tag 'bar' already exists (use -f to force)
[10]
$ hg tags
tip 6:735c3ca72986
bar 0:bbd179dfa0a7

Strip 1: expose an old head:

$ hg --config extensions.mq= strip 5
saved backup bundle to $TESTTMP/t3/.hg/strip-backup/*-backup.hg (glob)
$ hg tags # partly stale cache
tip 5:735c3ca72986
bar 1:78391a272241
$ hg tags # up-to-date cache
tip 5:735c3ca72986
bar 1:78391a272241

Strip 2: destroy whole branch, no old head exposed

$ hg --config extensions.mq= strip 4
saved backup bundle to $TESTTMP/t3/.hg/strip-backup/*-backup.hg (glob)
$ hg tags # partly stale
tip 4:735c3ca72986
bar 0:bbd179dfa0a7
$ rm -f .hg/cache/tags2-visible
$ hg tags # cold cache
tip 4:735c3ca72986
bar 0:bbd179dfa0a7

Test tag rank with 3 heads:

$ cd ..
$ hg init t4
$ cd t4
$ echo foo > foo
$ hg add
adding foo
$ hg ci -m 'add foo' # rev 0
$ hg tag bar # rev 1 bar -> 0
$ hg tag -f bar # rev 2 bar -> 1
$ hg up -qC 0
$ hg tag -fr 2 bar # rev 3 bar -> 2
$ hg tags
tip 3:197c21bbbf2c
bar 2:6fa450212aeb
$ hg up -qC 0
$ hg tag -m 'retag rev 0' -fr 0 bar # rev 4 bar -> 0, but bar stays at 2

Bar should still point to rev 2:

$ hg tags
tip 4:3b4b14ed0202
bar 2:6fa450212aeb

Test that removing global/local tags does not get confused when trying
to remove a tag of type X that actually only exists as type Y:

$ cd ..
$ hg init t5
$ cd t5
$ echo foo > foo
$ hg add
adding foo
$ hg ci -m 'add foo' # rev 0

$ hg tag -r 0 -l localtag
$ hg tag --remove localtag
abort: tag 'localtag' is not a global tag
[10]
$
$ hg tag -r 0 globaltag
$ hg tag --remove -l globaltag
abort: tag 'globaltag' is not a local tag
[10]
$ hg tags -v
tip 1:a0b6fe111088
localtag 0:bbd179dfa0a7 local
globaltag 0:bbd179dfa0a7

Templated output:

(immediate values)

$ hg tags -T '{pad(tag, 9)} {rev}:{node} ({type})\n'
tip 1:a0b6fe111088c8c29567d3876cc466aa02927cae ()
localtag 0:bbd179dfa0a71671c253b3ae0aa1513b60d199fa (local)
globaltag 0:bbd179dfa0a71671c253b3ae0aa1513b60d199fa ()

(ctx/revcache dependent)

$ hg tags -T '{pad(tag, 9)} {rev} {file_adds}\n'
tip 1 .hgtags
localtag 0 foo
globaltag 0 foo

$ hg tags -T '{pad(tag, 9)} {rev}:{node|shortest}\n'
tip 1:a0b6
localtag 0:bbd1
globaltag 0:bbd1

Test for issue3911

$ hg tag -r 0 -l localtag2
$ hg tag -l --remove localtag2
$ hg tags -v
tip 1:a0b6fe111088
localtag 0:bbd179dfa0a7 local
globaltag 0:bbd179dfa0a7

$ hg tag -r 1 -f localtag
$ hg tags -v
tip 2:5c70a037bb37
localtag 1:a0b6fe111088
globaltag 0:bbd179dfa0a7

$ hg tags -v
tip 2:5c70a037bb37
localtag 1:a0b6fe111088
globaltag 0:bbd179dfa0a7

$ hg tag -r 1 localtag2
$ hg tags -v
tip 3:bbfb8cd42be2
localtag2 1:a0b6fe111088
localtag 1:a0b6fe111088
globaltag 0:bbd179dfa0a7

$ hg tags -v
tip 3:bbfb8cd42be2
localtag2 1:a0b6fe111088
localtag 1:a0b6fe111088
globaltag 0:bbd179dfa0a7

$ cd ..

Create a repository with tags data to test .hgtags fnodes transfer

$ hg init tagsserver
$ cd tagsserver
$ touch foo
$ hg -q commit -A -m initial
$ hg tag -m 'tag 0.1' 0.1
$ echo second > foo
$ hg commit -m second
$ hg tag -m 'tag 0.2' 0.2
$ hg tags
tip 3:40f0358cb314
0.2 2:f63cc8fe54e4
0.1 0:96ee1d7354c4
$ cd ..

Cloning should pull down hgtags fnodes mappings and write the cache file

$ hg clone --pull tagsserver tagsclient
requesting all changes
adding changesets
adding manifests
adding file changes
added 4 changesets with 4 changes to 2 files
new changesets 96ee1d7354c4:40f0358cb314
updating to branch default
2 files updated, 0 files merged, 0 files removed, 0 files unresolved

Missing tags2* files mean the cache wasn't written through the normal mechanism.

$ ls tagsclient/.hg/cache
branch2-base
branch2-immutable
branch2-served
branch2-served.hidden
branch2-visible
branch2-visible-hidden
hgtagsfnodes1
rbc-names-v1
rbc-revs-v1
tags2
tags2-served

Cache should contain the head only, even though other nodes have tags data

$ f --size --hexdump tagsclient/.hg/cache/hgtagsfnodes1
tagsclient/.hg/cache/hgtagsfnodes1: size=96
0000: 96 ee 1d 73 00 00 00 00 00 00 00 00 00 00 00 00 |...s............|
0010: 00 00 00 00 00 00 00 00 c4 da b0 c2 94 65 e1 c6 |.............e..|
0020: 0d f7 f0 dd 32 04 ea 57 78 c8 97 97 79 fc d5 95 |....2..Wx...y...|
0030: f6 3c c8 fe 94 65 e1 c6 0d f7 f0 dd 32 04 ea 57 |.<...e......2..W|
0040: 78 c8 97 97 79 fc d5 95 40 f0 35 8c 19 e0 a7 d3 |x...y...@.5.....|
0050: 8a 5c 6a 82 4d cf fb a5 87 d0 2f a3 1e 4f 2f 8a |.\j.M...../..O/.|

Running hg tags should produce the tags2* file and not change the cache

$ hg -R tagsclient tags
tip 3:40f0358cb314
0.2 2:f63cc8fe54e4
0.1 0:96ee1d7354c4

$ ls tagsclient/.hg/cache
branch2-base
branch2-immutable
branch2-served
branch2-served.hidden
branch2-visible
branch2-visible-hidden
hgtagsfnodes1
rbc-names-v1
rbc-revs-v1
tags2
tags2-served
tags2-visible

$ f --size --hexdump tagsclient/.hg/cache/hgtagsfnodes1
tagsclient/.hg/cache/hgtagsfnodes1: size=96
0000: 96 ee 1d 73 00 00 00 00 00 00 00 00 00 00 00 00 |...s............|
0010: 00 00 00 00 00 00 00 00 c4 da b0 c2 94 65 e1 c6 |.............e..|
0020: 0d f7 f0 dd 32 04 ea 57 78 c8 97 97 79 fc d5 95 |....2..Wx...y...|
0030: f6 3c c8 fe 94 65 e1 c6 0d f7 f0 dd 32 04 ea 57 |.<...e......2..W|
0040: 78 c8 97 97 79 fc d5 95 40 f0 35 8c 19 e0 a7 d3 |x...y...@.5.....|
0050: 8a 5c 6a 82 4d cf fb a5 87 d0 2f a3 1e 4f 2f 8a |.\j.M...../..O/.|

Check that the bundle includes cache data

$ hg -R tagsclient bundle --all ./test-cache-in-bundle-all-rev.hg
4 changesets found
$ hg debugbundle ./test-cache-in-bundle-all-rev.hg
Stream params: {Compression: BZ}
changegroup -- {nbchanges: 4, version: 02} (mandatory: True)
96ee1d7354c4ad7372047672c36a1f561e3a6a4c
c4dab0c2fd337eb9191f80c3024830a4889a8f34
f63cc8fe54e4d326f8d692805d70e092f851ddb1
40f0358cb314c824a5929ee527308d90e023bc10
hgtagsfnodes -- {} (mandatory: True)
cache:rev-branch-cache -- {} (mandatory: False)

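(For illustration only, not part of the test: the hgtagsfnodes part above
carries the fnodes mapping inside the bundle. Its payload is assumed here to
be a flat stream of 20-byte changeset node / 20-byte .hgtags filenode pairs,
and the function name is hypothetical.)

  def iter_hgtagsfnodes_payload(payload):
      # Assumed layout: consecutive 40-byte entries, each a 20-byte
      # changeset node followed by the 20-byte .hgtags filenode.
      for off in range(0, len(payload) - 39, 40):
          yield payload[off:off + 20], payload[off + 20:off + 40]
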
Check that local clone includes cache data

$ hg clone tagsclient tags-local-clone
updating to branch default
2 files updated, 0 files merged, 0 files removed, 0 files unresolved
$ (cd tags-local-clone/.hg/cache/; ls -1 tag*)
tags2
tags2-served
tags2-visible

Avoid writing logs when trying to delete an already deleted tag
$ hg init issue5752
$ cd issue5752
$ echo > a
$ hg commit -Am 'add a'
adding a
$ hg tag a
$ hg tags
tip 1:bd7ee4f3939b
a 0:a8a82d372bb3
$ hg log
changeset: 1:bd7ee4f3939b
tag: tip
user: test
date: Thu Jan 01 00:00:00 1970 +0000
summary: Added tag a for changeset a8a82d372bb3

changeset: 0:a8a82d372bb3
tag: a
user: test
date: Thu Jan 01 00:00:00 1970 +0000
summary: add a

$ hg tag --remove a
$ hg log
changeset: 2:e7feacc7ec9e
tag: tip
user: test
date: Thu Jan 01 00:00:00 1970 +0000
summary: Removed tag a

changeset: 1:bd7ee4f3939b
user: test
date: Thu Jan 01 00:00:00 1970 +0000
summary: Added tag a for changeset a8a82d372bb3

changeset: 0:a8a82d372bb3
user: test
date: Thu Jan 01 00:00:00 1970 +0000
summary: add a

$ hg tag --remove a
abort: tag 'a' is already removed
[10]
$ hg log
changeset: 2:e7feacc7ec9e
tag: tip
user: test
date: Thu Jan 01 00:00:00 1970 +0000
summary: Removed tag a

changeset: 1:bd7ee4f3939b
user: test
date: Thu Jan 01 00:00:00 1970 +0000
summary: Added tag a for changeset a8a82d372bb3

changeset: 0:a8a82d372bb3
user: test
date: Thu Jan 01 00:00:00 1970 +0000
summary: add a

$ cat .hgtags
a8a82d372bb35b42ff736e74f07c23bcd99c371f a
a8a82d372bb35b42ff736e74f07c23bcd99c371f a
0000000000000000000000000000000000000000 a