# bundle2.py - generic container format to transmit arbitrary data.
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""Handling of the new bundle2 format

The goal of bundle2 is to act as an atomic packet to transmit a set of
payloads in an application agnostic way. It consists of a sequence of "parts"
that will be handed to and processed by the application layer.


General format architecture
===========================

The format is structured as follows:

- magic string
- stream level parameters
- payload parts (any number)
- end of stream marker.

The binary format
============================

All numbers are unsigned and big-endian.

stream level parameters
------------------------

The binary format is as follows:

:params size: int32

  The total number of Bytes used by the parameters

:params value: arbitrary number of Bytes

  A blob of `params size` containing the serialized version of all stream
  level parameters.

  The blob contains a space separated list of parameters. Parameters with a
  value are stored in the form `<name>=<value>`. Both name and value are
  urlquoted.

  Empty names are forbidden.

  A name MUST start with a letter. If the first letter is lower case, the
  parameter is advisory and can be safely ignored. However, when the first
  letter is capital, the parameter is mandatory and the bundling process MUST
  stop if it is not able to process it.

  Stream parameters use a simple textual format for two main reasons:

  - Stream level parameters should remain simple and we want to discourage
    any crazy usage.
  - Textual data allows easy human inspection of a bundle2 header in case of
    trouble.

  Any application level options MUST go into a bundle2 part instead.

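  eg: for illustration, a hypothetical bundle whose only stream level
  parameter is ``Compression=BZ`` starts with the magic string followed
  by, with the size written in decimal for readability::

      <14> Compression=BZ

  where `<14>` stands for the big-endian int32 value 14 (bytes
  00 00 00 0e in hex), the size of the 14-byte parameter blob.
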
Payload part
------------------------

The binary format is as follows:

:header size: int32

  The total number of Bytes used by the part header. When the header is empty
  (size = 0) this is interpreted as the end of stream marker.

:header:

  The header defines how to interpret the part. It contains two pieces of
  data: the part type, and the part parameters.

  The part type is used to route to an application level handler that can
  interpret the payload.

  Part parameters are passed to the application level handler. They are
  meant to convey information that will help the application level object
  interpret the part payload.

  The binary format of the header is as follows:

  :typesize: (one byte)

  :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)

  :partid: A 32-bit integer (unique in the bundle) that can be used to refer
           to this part.

  :parameters:

    A part's parameters may have arbitrary content, the binary structure
    is::

        <mandatory-count><advisory-count><param-sizes><param-data>

    :mandatory-count: 1 byte, number of mandatory parameters

    :advisory-count: 1 byte, number of advisory parameters

    :param-sizes:

      N pairs of bytes, where N is the total number of parameters. Each
      pair contains (<size-of-key>, <size-of-value>) for one parameter.

    :param-data:

      A blob of bytes from which each parameter key and value can be
      retrieved using the list of size pairs stored in the previous
      field.

      Mandatory parameters come first, then the advisory ones.

      Each parameter's key MUST be unique within the part.

:payload:

  payload is a series of `<chunksize><chunkdata>`.

  `chunksize` is an int32; `chunkdata` is plain bytes (as many as
  `chunksize` says). The payload is concluded by a zero size chunk.

  The current implementation always produces either zero or one chunk.
  This is an implementation limitation that will ultimately be lifted.

  `chunksize` can be negative to trigger special case processing. No such
  processing is in place yet.

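  eg: a payload carrying the five bytes ``hello`` in a single chunk is
  framed as (chunk sizes in decimal)::

      <5> hello <0>
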
Bundle processing
============================

Each part is processed in order using a "part handler". Handlers are
registered for a certain part type.

The matching of a part to its handler is case insensitive. The case of the
part type is used to know if a part is mandatory or advisory. If the part
type contains any uppercase char it is considered mandatory. When no handler
is known for a mandatory part, the process is aborted and an exception is
raised. If the part is advisory and no handler is known, the part is
ignored. For example, a part of type ``output`` is advisory, while
``OUTPUT`` (or any spelling containing an uppercase letter) matches the
same handler but is mandatory. When the process is aborted, the full bundle
is still read from the stream to keep the channel usable. But none of the
parts read after an abort are processed. In the future, dropping the stream
may become an option for channels we do not care to preserve.
146 """
146 """
147
147
148
148
import collections
import errno
import os
import re
import string
import struct
import sys

from .i18n import _
from .node import (
    hex,
    short,
)
from . import (
    bookmarks,
    changegroup,
    encoding,
    error,
    obsolete,
    phases,
    pushkey,
    pycompat,
    requirements,
    scmutil,
    streamclone,
    tags,
    url,
    util,
)
from .utils import (
    stringutil,
    urlutil,
)
from .interfaces import repository

urlerr = util.urlerr
urlreq = util.urlreq

_pack = struct.pack
_unpack = struct.unpack

_fstreamparamsize = b'>i'
_fpartheadersize = b'>i'
_fparttypesize = b'>B'
_fpartid = b'>I'
_fpayloadsize = b'>i'
_fpartparamcount = b'>BB'

preferedchunksize = 32768

_parttypeforbidden = re.compile(b'[^a-zA-Z0-9_:-]')


def outdebug(ui, message):
    """debug regarding output stream (bundling)"""
    if ui.configbool(b'devel', b'bundle2.debug'):
        ui.debug(b'bundle2-output: %s\n' % message)


def indebug(ui, message):
    """debug on input stream (unbundling)"""
    if ui.configbool(b'devel', b'bundle2.debug'):
        ui.debug(b'bundle2-input: %s\n' % message)


def validateparttype(parttype):
    """raise ValueError if a parttype contains an invalid character"""
    if _parttypeforbidden.search(parttype):
        raise ValueError(parttype)


def _makefpartparamsizes(nbparams):
    """return a struct format to read part parameter sizes

    The number of parameters is variable so we need to build that format
    dynamically.
    """
    return b'>' + (b'BB' * nbparams)
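

# A minimal sketch (illustration only; not part of the original module) of
# how the two count bytes and the size pairs of a part's parameter block can
# be decoded with the helpers above. The function name is hypothetical.
def _exampledecodeparamsizes(data):
    """Return ([(keysize, valuesize), ...], remainder) for a parameter block.

    `data` must start at the <mandatory-count> byte described in the module
    docstring.
    """
    mancount, advcount = _unpack(_fpartparamcount, data[:2])
    fmt = _makefpartparamsizes(mancount + advcount)
    end = 2 + struct.calcsize(fmt)
    flat = _unpack(fmt, data[2:end])
    pairs = list(zip(flat[::2], flat[1::2]))
    return pairs, data[end:]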


parthandlermapping = {}


def parthandler(parttype, params=()):
    """decorator that registers a function as a bundle2 part handler

    eg::

        @parthandler('myparttype', ('mandatory', 'param', 'handled'))
        def myparttypehandler(...):
            '''process a part of type "my part".'''
            ...
    """
    validateparttype(parttype)

    def _decorator(func):
        lparttype = parttype.lower()  # enforce lower case matching.
        assert lparttype not in parthandlermapping
        parthandlermapping[lparttype] = func
        func.params = frozenset(params)
        return func

    return _decorator


class unbundlerecords:
    """keep record of what happens during an unbundle

    New records are added using `records.add('cat', obj)`, where 'cat' is a
    category of record and obj is an arbitrary object.

    `records['cat']` will return all entries of this category 'cat'.

    Iterating on the object itself will yield `('category', obj)` tuples
    for all entries.

    All iterations happen in chronological order.
    """

    def __init__(self):
        self._categories = {}
        self._sequences = []
        self._replies = {}

    def add(self, category, entry, inreplyto=None):
        """add a new record of a given category.

        The entry can then be retrieved in the list returned by
        self['category']."""
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))
        if inreplyto is not None:
            self.getreplies(inreplyto).add(category, entry)

    def getreplies(self, partid):
        """get the records that are replies to a specific part"""
        return self._replies.setdefault(partid, unbundlerecords())

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

    def __len__(self):
        return len(self._sequences)

    def __nonzero__(self):
        return bool(self._sequences)

    __bool__ = __nonzero__
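

# A minimal usage sketch (illustration only; not part of the original
# module) of the record API described in the class docstring above.
def _exampleunbundlerecords():
    records = unbundlerecords()
    records.add(b'changegroup', {b'return': 1})
    records.add(b'output', b'some text', inreplyto=0)
    # indexing by category returns all entries of that category
    assert records[b'changegroup'] == ({b'return': 1},)
    # iteration yields (category, entry) tuples in chronological order
    assert list(records) == [
        (b'changegroup', {b'return': 1}),
        (b'output', b'some text'),
    ]
    # replies are grouped per originating part id
    assert list(records.getreplies(0)) == [(b'output', b'some text')]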


class bundleoperation:
    """an object that represents a single bundling process

    Its purpose is to carry unbundle-related objects and states.

    A new object should be created at the beginning of each bundle processing.
    The object is to be returned by the processing function.

    The object has very little content now; it will ultimately contain:
    * an access to the repo the bundle is applied to,
    * a ui object,
    * a way to retrieve a transaction to add changes to the repo,
    * a way to record the result of processing each part,
    * a way to construct a bundle response when applicable.
    """

    def __init__(self, repo, transactiongetter, captureoutput=True, source=b''):
        self.repo = repo
        self.ui = repo.ui
        self.records = unbundlerecords()
        self.reply = None
        self.captureoutput = captureoutput
        self.hookargs = {}
        self._gettransaction = transactiongetter
        # carries values that can modify part behavior
        self.modes = {}
        self.source = source

    def gettransaction(self):
        transaction = self._gettransaction()

        if self.hookargs:
            # the ones added to the transaction supersede those added
            # to the operation.
            self.hookargs.update(transaction.hookargs)
            transaction.hookargs = self.hookargs

            # mark the hookargs as flushed. further attempts to add to
            # hookargs will result in an abort.
            self.hookargs = None

        return transaction

    def addhookargs(self, hookargs):
        if self.hookargs is None:
            raise error.ProgrammingError(
                b'attempted to add hookargs to '
                b'operation after transaction started'
            )
        self.hookargs.update(hookargs)


class TransactionUnavailable(RuntimeError):
    pass


def _notransaction():
    """default method to get a transaction while processing a bundle

    Raise an exception to highlight the fact that no transaction was expected
    to be created"""
    raise TransactionUnavailable()


def applybundle(repo, unbundler, tr, source, url=None, **kwargs):
    # transform me into unbundler.apply() as soon as the freeze is lifted
    if isinstance(unbundler, unbundle20):
        tr.hookargs[b'bundle2'] = b'1'
        if source is not None and b'source' not in tr.hookargs:
            tr.hookargs[b'source'] = source
        if url is not None and b'url' not in tr.hookargs:
            tr.hookargs[b'url'] = url
        return processbundle(repo, unbundler, lambda: tr, source=source)
    else:
        # the transactiongetter won't be used, but we might as well set it
        op = bundleoperation(repo, lambda: tr, source=source)
        _processchangegroup(op, unbundler, tr, source, url, **kwargs)
        return op


class partiterator:
    def __init__(self, repo, op, unbundler):
        self.repo = repo
        self.op = op
        self.unbundler = unbundler
        self.iterator = None
        self.count = 0
        self.current = None

    def __enter__(self):
        def func():
            itr = enumerate(self.unbundler.iterparts(), 1)
            for count, p in itr:
                self.count = count
                self.current = p
                yield p
                p.consume()
                self.current = None

        self.iterator = func()
        return self.iterator

    def __exit__(self, type, exc, tb):
        if not self.iterator:
            return

        # Only gracefully abort in a normal exception situation. User aborts
        # like Ctrl+C throw a KeyboardInterrupt which is not a subclass of
        # Exception, and should not gracefully cleanup.
        if isinstance(exc, Exception):
            # Any exceptions seeking to the end of the bundle at this point are
            # almost certainly related to the underlying stream being bad.
            # And, chances are that the exception we're handling is related to
            # getting in that bad state. So, we swallow the seeking error and
            # re-raise the original error.
            seekerror = False
            try:
                if self.current:
                    # consume the part content to not corrupt the stream.
                    self.current.consume()

                for part in self.iterator:
                    # consume the bundle content
                    part.consume()
            except Exception:
                seekerror = True

            # Small hack to let caller code distinguish exceptions from bundle2
            # processing from those raised when processing the old format. This
            # is mostly needed to handle different return codes to unbundle
            # according to the type of bundle. We should probably clean up or
            # drop this return code craziness in a future version.
            exc.duringunbundle2 = True
            salvaged = []
            replycaps = None
            if self.op.reply is not None:
                salvaged = self.op.reply.salvageoutput()
                replycaps = self.op.reply.capabilities
            exc._replycaps = replycaps
            exc._bundle2salvagedoutput = salvaged

            # Re-raising from a variable loses the original stack. So only use
            # that form if we need to.
            if seekerror:
                raise exc

        self.repo.ui.debug(
            b'bundle2-input-bundle: %i parts total\n' % self.count
        )


def processbundle(repo, unbundler, transactiongetter=None, op=None, source=b''):
    """This function processes a bundle, applying effects to/from a repo

    It iterates over each part, then searches for and uses the proper
    handling code to process the part. Parts are processed in order.

    An unknown mandatory part will abort the process.

    It is temporarily possible to provide a prebuilt bundleoperation to the
    function. This is used to ensure output is properly propagated in case of
    an error during the unbundling. This output capturing part will likely be
    reworked and this ability will probably go away in the process.
    """
    if op is None:
        if transactiongetter is None:
            transactiongetter = _notransaction
        op = bundleoperation(repo, transactiongetter, source=source)
    # todo:
    # - replace this with an init function soon.
    # - exception catching
    unbundler.params
    if repo.ui.debugflag:
        msg = [b'bundle2-input-bundle:']
        if unbundler.params:
            msg.append(b' %i params' % len(unbundler.params))
        if op._gettransaction is None or op._gettransaction is _notransaction:
            msg.append(b' no-transaction')
        else:
            msg.append(b' with-transaction')
        msg.append(b'\n')
        repo.ui.debug(b''.join(msg))

    processparts(repo, op, unbundler)

    return op


def processparts(repo, op, unbundler):
    with partiterator(repo, op, unbundler) as parts:
        for part in parts:
            _processpart(op, part)


def _processchangegroup(op, cg, tr, source, url, **kwargs):
    ret = cg.apply(op.repo, tr, source, url, **kwargs)
    op.records.add(
        b'changegroup',
        {
            b'return': ret,
        },
    )
    return ret


def _gethandler(op, part):
    status = b'unknown'  # used by debug output
    try:
        handler = parthandlermapping.get(part.type)
        if handler is None:
            status = b'unsupported-type'
            raise error.BundleUnknownFeatureError(parttype=part.type)
        indebug(op.ui, b'found a handler for part %s' % part.type)
        unknownparams = part.mandatorykeys - handler.params
        if unknownparams:
            unknownparams = list(unknownparams)
            unknownparams.sort()
            status = b'unsupported-params (%s)' % b', '.join(unknownparams)
            raise error.BundleUnknownFeatureError(
                parttype=part.type, params=unknownparams
            )
        status = b'supported'
    except error.BundleUnknownFeatureError as exc:
        if part.mandatory:  # mandatory parts
            raise
        indebug(op.ui, b'ignoring unsupported advisory part %s' % exc)
        return  # skip to part processing
    finally:
        if op.ui.debugflag:
            msg = [b'bundle2-input-part: "%s"' % part.type]
            if not part.mandatory:
                msg.append(b' (advisory)')
            nbmp = len(part.mandatorykeys)
            nbap = len(part.params) - nbmp
            if nbmp or nbap:
                msg.append(b' (params:')
                if nbmp:
                    msg.append(b' %i mandatory' % nbmp)
                if nbap:
                    msg.append(b' %i advisory' % nbap)
                msg.append(b')')
            msg.append(b' %s\n' % status)
            op.ui.debug(b''.join(msg))

    return handler


def _processpart(op, part):
    """process a single part from a bundle

    The part is guaranteed to have been fully consumed when the function exits
    (even if an exception is raised)."""
    handler = _gethandler(op, part)
    if handler is None:
        return

    # handler is called outside the above try block so that we don't
    # risk catching KeyErrors from anything other than the
    # parthandlermapping lookup (any KeyError raised by handler()
    # itself represents a defect of a different variety).
    output = None
    if op.captureoutput and op.reply is not None:
        op.ui.pushbuffer(error=True, subproc=True)
        output = b''
    try:
        handler(op, part)
    finally:
        if output is not None:
            output = op.ui.popbuffer()
        if output:
            outpart = op.reply.newpart(b'output', data=output, mandatory=False)
            outpart.addparam(
                b'in-reply-to', pycompat.bytestr(part.id), mandatory=False
            )


def decodecaps(blob):
    """decode a bundle2 caps bytes blob into a dictionary

    The blob is a list of capabilities (one per line)
    Capabilities may have values using a line of the form::

        capability=value1,value2,value3

    The values are always a list."""
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if b'=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split(b'=', 1)
            vals = vals.split(b',')
        key = urlreq.unquote(key)
        vals = [urlreq.unquote(v) for v in vals]
        caps[key] = vals
    return caps


def encodecaps(caps):
    """encode a bundle2 caps dictionary into a bytes blob"""
    chunks = []
    for ca in sorted(caps):
        vals = caps[ca]
        ca = urlreq.quote(ca)
        vals = [urlreq.quote(v) for v in vals]
        if vals:
            ca = b"%s=%s" % (ca, b','.join(vals))
        chunks.append(ca)
    return b'\n'.join(chunks)
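

# A round-trip sketch (illustration only; not part of the original module)
# of the caps blob format documented in decodecaps() above.
def _examplecapsroundtrip():
    caps = {b'HG20': [], b'changegroup': [b'01', b'02']}
    blob = encodecaps(caps)
    # capability without values on its own line, values joined by commas
    assert blob == b'HG20\nchangegroup=01,02'
    assert decodecaps(blob) == caps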


bundletypes = {
    b"": (b"", b'UN'),  # only when using unbundle on ssh and old http servers
    # since the unification ssh accepts a header but there
    # is no capability signaling it.
    b"HG20": (),  # special-cased below
    b"HG10UN": (b"HG10UN", b'UN'),
    b"HG10BZ": (b"HG10", b'BZ'),
    b"HG10GZ": (b"HG10GZ", b'GZ'),
}

# hgweb uses this list to communicate its preferred type
bundlepriority = [b'HG10GZ', b'HG10BZ', b'HG10UN']


class bundle20:
    """represent an outgoing bundle2 container

    Use the `addparam` method to add stream-level parameters and `newpart` to
    populate it. Then call `getchunks` to retrieve all the binary chunks of
    data that compose the bundle2 container."""

    _magicstring = b'HG20'

    def __init__(self, ui, capabilities=()):
        self.ui = ui
        self._params = []
        self._parts = []
        self.capabilities = dict(capabilities)
        self._compengine = util.compengines.forbundletype(b'UN')
        self._compopts = None
        # If compression is being handled by a consumer of the raw
        # data (e.g. the wire protocol), unsetting this flag tells
        # consumers that the bundle is best left uncompressed.
        self.prefercompressed = True

    def setcompression(self, alg, compopts=None):
        """setup core part compression to <alg>"""
        if alg in (None, b'UN'):
            return
        assert not any(n.lower() == b'compression' for n, v in self._params)
        self.addparam(b'Compression', alg)
        self._compengine = util.compengines.forbundletype(alg)
        self._compopts = compopts

    @property
    def nbparts(self):
        """total number of parts added to the bundler"""
        return len(self._parts)

    # methods used to define the bundle2 content
    def addparam(self, name, value=None):
        """add a stream level parameter"""
        if not name:
            raise error.ProgrammingError(b'empty parameter name')
        if name[0:1] not in pycompat.bytestr(
            string.ascii_letters  # pytype: disable=wrong-arg-types
        ):
            raise error.ProgrammingError(
                b'non letter first character: %s' % name
            )
        self._params.append((name, value))

    def addpart(self, part):
        """add a new part to the bundle2 container

        Parts contain the actual application payload."""
        assert part.id is None
        part.id = len(self._parts)  # very cheap counter
        self._parts.append(part)

    def newpart(self, typeid, *args, **kwargs):
        """create a new part and add it to the container

        The part is directly added to the container. For now, this means
        that any failure to properly initialize the part after calling
        ``newpart`` should result in a failure of the whole bundling process.

        You can still fall back to manually creating and adding one if you
        need better control."""
        part = bundlepart(typeid, *args, **kwargs)
        self.addpart(part)
        return part

    # methods used to generate the bundle2 stream
    def getchunks(self):
        if self.ui.debugflag:
            msg = [b'bundle2-output-bundle: "%s",' % self._magicstring]
            if self._params:
                msg.append(b' (%i params)' % len(self._params))
            msg.append(b' %i parts total\n' % len(self._parts))
            self.ui.debug(b''.join(msg))
        outdebug(self.ui, b'start emission of %s stream' % self._magicstring)
        yield self._magicstring
        param = self._paramchunk()
        outdebug(self.ui, b'bundle parameter: %s' % param)
        yield _pack(_fstreamparamsize, len(param))
        if param:
            yield param
        for chunk in self._compengine.compressstream(
            self._getcorechunk(), self._compopts
        ):
            yield chunk

    def _paramchunk(self):
        """return an encoded version of all stream parameters"""
        blocks = []
        for par, value in self._params:
            par = urlreq.quote(par)
            if value is not None:
                value = urlreq.quote(value)
                par = b'%s=%s' % (par, value)
            blocks.append(par)
        return b' '.join(blocks)

    def _getcorechunk(self):
        """yield chunks for the core part of the bundle

        (all but headers and parameters)"""
        outdebug(self.ui, b'start of parts')
        for part in self._parts:
            outdebug(self.ui, b'bundle part: "%s"' % part.type)
            for chunk in part.getchunks(ui=self.ui):
                yield chunk
        outdebug(self.ui, b'end of bundle')
        yield _pack(_fpartheadersize, 0)

    def salvageoutput(self):
        """return a list with a copy of all output parts in the bundle

        This is meant to be used during error handling to make sure we
        preserve server output"""
        salvaged = []
        for part in self._parts:
            if part.type.startswith(b'output'):
                salvaged.append(part.copy())
        return salvaged
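

# A minimal sketch (illustration only; not part of the original module) of
# the workflow described in the bundle20 docstring: add stream parameters,
# then serialize with getchunks(). `ui` is assumed to be a Mercurial ui
# instance (eg from mercurial.ui.ui.load()); the parameter is hypothetical.
def _examplebuildbundle(ui):
    bundler = bundle20(ui)
    bundler.addparam(b'evolution', b'1')  # lower case first letter: advisory
    raw = b''.join(bundler.getchunks())
    # the stream starts with the magic string, then the int32 params size
    assert raw.startswith(b'HG20')
    return raw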


class unpackermixin:
    """A mixin to extract bytes and struct data from a stream"""

    def __init__(self, fp):
        self._fp = fp

    def _unpack(self, format):
        """unpack this struct format from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low-level stream, including bundle2-level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        data = self._readexact(struct.calcsize(format))
        return _unpack(format, data)

    def _readexact(self, size):
        """read exactly <size> bytes from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low-level stream, including bundle2-level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        return changegroup.readexactly(self._fp, size)
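

# A minimal sketch (illustration only; not part of the original module):
# the mixin reads exact byte counts and struct values from any file-like
# object, here the stream-level parameter header from the module docstring.
def _exampleunpackermixin():
    import io

    reader = unpackermixin(io.BytesIO(b'\x00\x00\x00\x0eCompression=BZ'))
    assert reader._unpack(_fstreamparamsize) == (14,)
    assert reader._readexact(14) == b'Compression=BZ'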


def getunbundler(ui, fp, magicstring=None):
    """return a valid unbundler object for a given magicstring"""
    if magicstring is None:
        magicstring = changegroup.readexactly(fp, 4)
    magic, version = magicstring[0:2], magicstring[2:4]
    if magic != b'HG':
        ui.debug(
            b"error: invalid magic: %r (version %r), should be 'HG'\n"
            % (magic, version)
        )
        raise error.Abort(_(b'not a Mercurial bundle'))
    unbundlerclass = formatmap.get(version)
    if unbundlerclass is None:
        raise error.Abort(_(b'unknown bundle version %s') % version)
    unbundler = unbundlerclass(ui, fp)
    indebug(ui, b'start processing of %s stream' % magicstring)
    return unbundler
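

# A round-trip sketch (illustration only; not part of the original module):
# feed the raw bytes produced by _examplebuildbundle() above back through
# getunbundler(). `ui` is assumed to be a Mercurial ui instance.
def _exampleroundtrip(ui):
    import io

    raw = _examplebuildbundle(ui)
    unbundler = getunbundler(ui, io.BytesIO(raw))
    assert isinstance(unbundler, unbundle20)
    # stream-level parameters are parsed lazily, on first access
    assert unbundler.params.get(b'evolution') == b'1'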


800 class unbundle20(unpackermixin):
800 class unbundle20(unpackermixin):
801 """interpret a bundle2 stream
801 """interpret a bundle2 stream
802
802
803 This class is fed with a binary stream and yields parts through its
803 This class is fed with a binary stream and yields parts through its
804 `iterparts` methods."""
804 `iterparts` methods."""
805
805
806 _magicstring = b'HG20'
806 _magicstring = b'HG20'
807
807
808 def __init__(self, ui, fp):
808 def __init__(self, ui, fp):
809 """If header is specified, we do not read it out of the stream."""
809 """If header is specified, we do not read it out of the stream."""
810 self.ui = ui
810 self.ui = ui
811 self._compengine = util.compengines.forbundletype(b'UN')
811 self._compengine = util.compengines.forbundletype(b'UN')
812 self._compressed = None
812 self._compressed = None
813 super(unbundle20, self).__init__(fp)
813 super(unbundle20, self).__init__(fp)
814
814
815 @util.propertycache
815 @util.propertycache
816 def params(self):
816 def params(self):
817 """dictionary of stream level parameters"""
817 """dictionary of stream level parameters"""
818 indebug(self.ui, b'reading bundle2 stream parameters')
818 indebug(self.ui, b'reading bundle2 stream parameters')
819 params = {}
819 params = {}
820 paramssize = self._unpack(_fstreamparamsize)[0]
820 paramssize = self._unpack(_fstreamparamsize)[0]
821 if paramssize < 0:
821 if paramssize < 0:
822 raise error.BundleValueError(
822 raise error.BundleValueError(
823 b'negative bundle param size: %i' % paramssize
823 b'negative bundle param size: %i' % paramssize
824 )
824 )
825 if paramssize:
825 if paramssize:
826 params = self._readexact(paramssize)
826 params = self._readexact(paramssize)
827 params = self._processallparams(params)
827 params = self._processallparams(params)
828 return params
828 return params
829
829
830 def _processallparams(self, paramsblock):
830 def _processallparams(self, paramsblock):
831 """ """
831 """ """
832 params = util.sortdict()
832 params = util.sortdict()
833 for p in paramsblock.split(b' '):
833 for p in paramsblock.split(b' '):
834 p = p.split(b'=', 1)
834 p = p.split(b'=', 1)
835 p = [urlreq.unquote(i) for i in p]
835 p = [urlreq.unquote(i) for i in p]
836 if len(p) < 2:
836 if len(p) < 2:
837 p.append(None)
837 p.append(None)
838 self._processparam(*p)
838 self._processparam(*p)
839 params[p[0]] = p[1]
839 params[p[0]] = p[1]
840 return params
840 return params
841
841
842 def _processparam(self, name, value):
842 def _processparam(self, name, value):
843 """process a parameter, applying its effect if needed
843 """process a parameter, applying its effect if needed
844
844
845 Parameter starting with a lower case letter are advisory and will be
845 Parameter starting with a lower case letter are advisory and will be
846 ignored when unknown. Those starting with an upper case letter are
846 ignored when unknown. Those starting with an upper case letter are
847 mandatory and will this function will raise a KeyError when unknown.
847 mandatory and will this function will raise a KeyError when unknown.
848
848
849 Note: no option are currently supported. Any input will be either
849 Note: no option are currently supported. Any input will be either
850 ignored or failing.
850 ignored or failing.
851 """
851 """
852 if not name:
852 if not name:
853 raise ValueError('empty parameter name')
853 raise ValueError('empty parameter name')
854 if name[0:1] not in pycompat.bytestr(
854 if name[0:1] not in pycompat.bytestr(
            string.ascii_letters  # pytype: disable=wrong-arg-types
        ):
            raise ValueError('non letter first character: %s' % name)
        try:
            handler = b2streamparamsmap[name.lower()]
        except KeyError:
            if name[0:1].islower():
                indebug(self.ui, b"ignoring unknown parameter %s" % name)
            else:
                raise error.BundleUnknownFeatureError(params=(name,))
        else:
            handler(self, name, value)

    def _forwardchunks(self):
        """utility to transfer a bundle2 as binary

        This is made necessary by the fact that the 'getbundle' command over
        'ssh' has no way to know when the reply ends, relying on the bundle
        being interpreted to know its end. This is terrible and we are sorry,
        but we needed to move forward to get general delta enabled.
        """
        yield self._magicstring
        assert 'params' not in vars(self)
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize < 0:
            raise error.BundleValueError(
                b'negative bundle param size: %i' % paramssize
            )
        if paramssize:
            params = self._readexact(paramssize)
            self._processallparams(params)
            # The payload itself is decompressed below, so drop
            # the compression parameter passed down to compensate.
            outparams = []
            for p in params.split(b' '):
                k, v = p.split(b'=', 1)
                if k.lower() != b'compression':
                    outparams.append(p)
            outparams = b' '.join(outparams)
            yield _pack(_fstreamparamsize, len(outparams))
            yield outparams
        else:
            yield _pack(_fstreamparamsize, paramssize)
        # From there, the payload might need to be decompressed
        self._fp = self._compengine.decompressorreader(self._fp)
        emptycount = 0
        while emptycount < 2:
            # so we can brainlessly loop
            assert _fpartheadersize == _fpayloadsize
            size = self._unpack(_fpartheadersize)[0]
            yield _pack(_fpartheadersize, size)
            if size:
                emptycount = 0
            else:
                emptycount += 1
                continue
            if size == flaginterrupt:
                continue
            elif size < 0:
                raise error.BundleValueError(b'negative chunk size: %i' % size)
            yield self._readexact(size)

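    # Illustrative sketch, not part of the module: relaying an incoming
    # bundle2 stream unmodified, e.g. to copy it to a file. 'unbundler' is
    # assumed to be an unbundle20 instance and 'out' any object with a
    # write() method; note that _forwardchunks is a private helper.
    #
    #   for chunk in unbundler._forwardchunks():
    #       out.write(chunk)
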
    def iterparts(self, seekable=False):
        """yield all parts contained in the stream"""
        cls = seekableunbundlepart if seekable else unbundlepart
        # make sure params have been loaded
        self.params
        # From there, the payload needs to be decompressed
        self._fp = self._compengine.decompressorreader(self._fp)
        indebug(self.ui, b'start extraction of bundle2 parts')
        headerblock = self._readpartheader()
        while headerblock is not None:
            part = cls(self.ui, headerblock, self._fp)
            yield part
            # Ensure part is fully consumed so we can start reading the next
            # part.
            part.consume()

            headerblock = self._readpartheader()
        indebug(self.ui, b'end of bundle2 stream')

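    # Illustrative sketch, not part of the module: listing the type of
    # every part in an incoming bundle. Assumes 'unbundler' is an
    # unbundle20 instance whose magic string has already been consumed.
    #
    #   for part in unbundler.iterparts():
    #       unbundler.ui.write(b'found part: %s\n' % part.type)
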
    def _readpartheader(self):
        """read the part header size and return the header bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError(
                b'negative part header size: %i' % headersize
            )
        indebug(self.ui, b'part header size: %i' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def compressed(self):
        self.params  # load params
        return self._compressed

    def close(self):
        """close underlying file"""
        if util.safehasattr(self._fp, 'close'):
            return self._fp.close()


formatmap = {b'20': unbundle20}

b2streamparamsmap = {}


def b2streamparamhandler(name):
    """register a handler for a stream level parameter"""

    def decorator(func):
        # guard against registering the same stream parameter twice
        assert name not in b2streamparamsmap
        b2streamparamsmap[name] = func
        return func

    return decorator


@b2streamparamhandler(b'compression')
def processcompression(unbundler, param, value):
    """read compression parameter and install payload decompression"""
    if value not in util.compengines.supportedbundletypes:
        raise error.BundleUnknownFeatureError(params=(param,), values=(value,))
    unbundler._compengine = util.compengines.forbundletype(value)
    if value is not None:
        unbundler._compressed = True
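
# Illustrative note, not part of the module: on the wire this parameter
# lives in the stream-level parameter blob right after the magic string,
# e.g. for a bzip2-compressed bundle the stream would start roughly as
#
#   b'HG20' + _pack(_fstreamparamsize, 14) + b'Compression=BZ'
#
# The capitalized first letter marks the parameter as mandatory, so a
# receiver that does not support the engine must abort rather than ignore it.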


class bundlepart:
    """A bundle2 part contains application level payload

    The part `type` is used to route the part to the application level
    handler.

    The part payload is contained in ``part.data``. It could be raw bytes or a
    generator of byte chunks.

    You can add parameters to the part using the ``addparam`` method.
    Parameters can be either mandatory (default) or advisory. The remote side
    should be able to safely ignore the advisory ones.

    Neither data nor parameters can be modified after generation has begun.
    """

    def __init__(
        self,
        parttype,
        mandatoryparams=(),
        advisoryparams=(),
        data=b'',
        mandatory=True,
    ):
        validateparttype(parttype)
        self.id = None
        self.type = parttype
        self._data = data
        self._mandatoryparams = list(mandatoryparams)
        self._advisoryparams = list(advisoryparams)
        # checking for duplicated entries
        self._seenparams = set()
        for pname, __ in self._mandatoryparams + self._advisoryparams:
            if pname in self._seenparams:
                raise error.ProgrammingError(b'duplicated params: %s' % pname)
            self._seenparams.add(pname)
        # status of the part's generation:
        # - None: not started,
        # - False: currently being generated,
        # - True: generation done.
        self._generated = None
        self.mandatory = mandatory

    def __repr__(self):
        cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
        return '<%s object at %x; id: %s; type: %s; mandatory: %s>' % (
            cls,
            id(self),
            self.id,
            self.type,
            self.mandatory,
        )

    def copy(self):
        """return a copy of the part

        The new part has the very same content but no partid assigned yet.
        Parts with generated data cannot be copied."""
        assert not util.safehasattr(self.data, 'next')
        return self.__class__(
            self.type,
            self._mandatoryparams,
            self._advisoryparams,
            self._data,
            self.mandatory,
        )

    # methods used to define the part content
    @property
    def data(self):
        return self._data

    @data.setter
    def data(self, data):
        if self._generated is not None:
            raise error.ReadOnlyPartError(b'part is being generated')
        self._data = data

    @property
    def mandatoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._mandatoryparams)

    @property
    def advisoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._advisoryparams)

    def addparam(self, name, value=b'', mandatory=True):
        """add a parameter to the part

        If 'mandatory' is set to True, the remote handler must claim support
        for this parameter or the unbundling will be aborted.

        The 'name' and 'value' cannot exceed 255 bytes each.
        """
        if self._generated is not None:
            raise error.ReadOnlyPartError(b'part is being generated')
        if name in self._seenparams:
            raise ValueError(b'duplicated params: %s' % name)
        self._seenparams.add(name)
        params = self._advisoryparams
        if mandatory:
            params = self._mandatoryparams
        params.append((name, value))

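    # Illustrative sketch, not part of the module: building a part with one
    # mandatory and one advisory parameter. The part type and values are
    # made up for the example.
    #
    #   part = bundlepart(b'test:song', data=b'la la la')
    #   part.addparam(b'verses', b'3')                      # mandatory
    #   part.addparam(b'humming', b'yes', mandatory=False)  # advisory
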
    # methods used to generate the bundle2 stream
    def getchunks(self, ui):
        if self._generated is not None:
            raise error.ProgrammingError(b'part can only be consumed once')
        self._generated = False

        if ui.debugflag:
            msg = [b'bundle2-output-part: "%s"' % self.type]
            if not self.mandatory:
                msg.append(b' (advisory)')
            nbmp = len(self.mandatoryparams)
            nbap = len(self.advisoryparams)
            if nbmp or nbap:
                msg.append(b' (params:')
                if nbmp:
                    msg.append(b' %i mandatory' % nbmp)
                if nbap:
                    msg.append(b' %i advisory' % nbap)
                msg.append(b')')
            if not self.data:
                msg.append(b' empty payload')
            elif util.safehasattr(self.data, 'next') or util.safehasattr(
                self.data, b'__next__'
            ):
                msg.append(b' streamed payload')
            else:
                msg.append(b' %i bytes payload' % len(self.data))
            msg.append(b'\n')
            ui.debug(b''.join(msg))

        #### header
        if self.mandatory:
            parttype = self.type.upper()
        else:
            parttype = self.type.lower()
        outdebug(ui, b'part %s: "%s"' % (pycompat.bytestr(self.id), parttype))
        ## parttype
        header = [
            _pack(_fparttypesize, len(parttype)),
            parttype,
            _pack(_fpartid, self.id),
        ]
        ## parameters
        # count
        manpar = self.mandatoryparams
        advpar = self.advisoryparams
        header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
        # size
        parsizes = []
        for key, value in manpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        for key, value in advpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        paramsizes = _pack(_makefpartparamsizes(len(parsizes) // 2), *parsizes)
        header.append(paramsizes)
        # key, value
        for key, value in manpar:
            header.append(key)
            header.append(value)
        for key, value in advpar:
            header.append(key)
            header.append(value)
        ## finalize header
        try:
            headerchunk = b''.join(header)
        except TypeError:
            raise TypeError(
                'Found a non-bytes trying to '
                'build bundle part header: %r' % header
            )
        outdebug(ui, b'header chunk size: %i' % len(headerchunk))
        yield _pack(_fpartheadersize, len(headerchunk))
        yield headerchunk
        ## payload
        try:
            for chunk in self._payloadchunks():
                outdebug(ui, b'payload chunk size: %i' % len(chunk))
                yield _pack(_fpayloadsize, len(chunk))
                yield chunk
        except GeneratorExit:
            # GeneratorExit means that nobody is listening for our
            # results anyway, so just bail quickly rather than trying
            # to produce an error part.
            ui.debug(b'bundle2-generatorexit\n')
            raise
        except BaseException as exc:
            bexc = stringutil.forcebytestr(exc)
            # backup exception data for later
            ui.debug(
                b'bundle2-input-stream-interrupt: encoding exception %s' % bexc
            )
            tb = sys.exc_info()[2]
            msg = b'unexpected error: %s' % bexc
            interpart = bundlepart(
                b'error:abort', [(b'message', msg)], mandatory=False
            )
            interpart.id = 0
            yield _pack(_fpayloadsize, -1)
            for chunk in interpart.getchunks(ui=ui):
                yield chunk
            outdebug(ui, b'closing payload chunk')
            # abort current part payload
            yield _pack(_fpayloadsize, 0)
            pycompat.raisewithtb(exc, tb)
        # end of payload
        outdebug(ui, b'closing payload chunk')
        yield _pack(_fpayloadsize, 0)
        self._generated = True

    def _payloadchunks(self):
        """yield chunks of the part payload

        Exists to handle the different methods to provide data to a part."""
        # we only support fixed size data now.
        # This will be improved in the future.
        if util.safehasattr(self.data, 'next') or util.safehasattr(
            self.data, b'__next__'
        ):
            buff = util.chunkbuffer(self.data)
            chunk = buff.read(preferedchunksize)
            while chunk:
                yield chunk
                chunk = buff.read(preferedchunksize)
        elif len(self.data):
            yield self.data


flaginterrupt = -1


class interrupthandler(unpackermixin):
    """read one part and process it with restricted capability

    This allows transmitting an exception raised on the producer side during
    part iteration while the consumer is reading a part.

    Parts processed in this manner only have access to a ui object."""

    def __init__(self, ui, fp):
        super(interrupthandler, self).__init__(fp)
        self.ui = ui

    def _readpartheader(self):
        """read the part header size and return the header bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError(
                b'negative part header size: %i' % headersize
            )
        indebug(self.ui, b'part header size: %i\n' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def __call__(self):
        self.ui.debug(
            b'bundle2-input-stream-interrupt: opening out of band context\n'
        )
        indebug(self.ui, b'bundle2 stream interruption, looking for a part.')
        headerblock = self._readpartheader()
        if headerblock is None:
            indebug(self.ui, b'no part found during interruption.')
            return
        part = unbundlepart(self.ui, headerblock, self._fp)
        op = interruptoperation(self.ui)
        hardabort = False
        try:
            _processpart(op, part)
        except (SystemExit, KeyboardInterrupt):
            hardabort = True
            raise
        finally:
            if not hardabort:
                part.consume()
        self.ui.debug(
            b'bundle2-input-stream-interrupt: closing out of band context\n'
        )


class interruptoperation:
    """A limited operation to be used by part handlers during interruption

    It only has access to a ui object.
    """

    def __init__(self, ui):
        self.ui = ui
        self.reply = None
        self.captureoutput = False

    @property
    def repo(self):
        raise error.ProgrammingError(b'no repo access from stream interruption')

    def gettransaction(self):
        raise TransactionUnavailable(b'no repo access from stream interruption')


def decodepayloadchunks(ui, fh):
    """Reads bundle2 part payload data into chunks.

    Part payload data consists of framed chunks. This function takes
    a file handle and emits those chunks.
    """
    dolog = ui.configbool(b'devel', b'bundle2.debug')
    debug = ui.debug

    headerstruct = struct.Struct(_fpayloadsize)
    headersize = headerstruct.size
    unpack = headerstruct.unpack

    readexactly = changegroup.readexactly
    read = fh.read

    chunksize = unpack(readexactly(fh, headersize))[0]
    indebug(ui, b'payload chunk size: %i' % chunksize)

    # changegroup.readexactly() is inlined below for performance.
    while chunksize:
        if chunksize >= 0:
            s = read(chunksize)
            if len(s) < chunksize:
                raise error.Abort(
                    _(
                        b'stream ended unexpectedly '
                        b'(got %d bytes, expected %d)'
                    )
                    % (len(s), chunksize)
                )

            yield s
        elif chunksize == flaginterrupt:
            # Interrupt "signal" detected. The regular stream is interrupted
            # and a bundle2 part follows. Consume it.
            interrupthandler(ui, fh)()
        else:
            raise error.BundleValueError(
                b'negative payload chunk size: %s' % chunksize
            )

        s = read(headersize)
        if len(s) < headersize:
            raise error.Abort(
                _(b'stream ended unexpectedly (got %d bytes, expected %d)')
                % (len(s), headersize)
            )

        chunksize = unpack(s)[0]

        # indebug() inlined for performance.
        if dolog:
            debug(b'bundle2-input: payload chunk size: %i\n' % chunksize)
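
# Illustrative sketch, not part of the module: draining the framed payload
# of a single part directly from a file handle. 'ui' and 'fh' are assumed
# to be a ui object and a file object positioned at the start of a payload.
#
#   payload = b''.join(decodepayloadchunks(ui, fh))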


class unbundlepart(unpackermixin):
    """a bundle part read from a bundle"""

    def __init__(self, ui, header, fp):
        super(unbundlepart, self).__init__(fp)
        self._seekable = util.safehasattr(fp, 'seek') and util.safehasattr(
            fp, b'tell'
        )
        self.ui = ui
        # unbundle state attr
        self._headerdata = header
        self._headeroffset = 0
        self._initialized = False
        self.consumed = False
        # part data
        self.id = None
        self.type = None
        self.mandatoryparams = None
        self.advisoryparams = None
        self.params = None
        self.mandatorykeys = ()
        self._readheader()
        self._mandatory = None
        self._pos = 0

    def _fromheader(self, size):
        """return the next <size> bytes from the header"""
        offset = self._headeroffset
        data = self._headerdata[offset : (offset + size)]
        self._headeroffset = offset + size
        return data

    def _unpackheader(self, format):
        """read given format from header

        This automatically computes the size of the format to read."""
        data = self._fromheader(struct.calcsize(format))
        return _unpack(format, data)

    def _initparams(self, mandatoryparams, advisoryparams):
        """internal function to setup all logic related parameters"""
        # make it read only to prevent people touching it by mistake.
        self.mandatoryparams = tuple(mandatoryparams)
        self.advisoryparams = tuple(advisoryparams)
        # user friendly UI
        self.params = util.sortdict(self.mandatoryparams)
        self.params.update(self.advisoryparams)
        self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)

    def _readheader(self):
        """read the header and setup the object"""
        typesize = self._unpackheader(_fparttypesize)[0]
        self.type = self._fromheader(typesize)
        indebug(self.ui, b'part type: "%s"' % self.type)
        self.id = self._unpackheader(_fpartid)[0]
        indebug(self.ui, b'part id: "%s"' % pycompat.bytestr(self.id))
        # extract mandatory bit from type
        self.mandatory = self.type != self.type.lower()
        self.type = self.type.lower()
        ## reading parameters
        # param count
        mancount, advcount = self._unpackheader(_fpartparamcount)
        indebug(self.ui, b'part parameters: %i' % (mancount + advcount))
        # param size
        fparamsizes = _makefpartparamsizes(mancount + advcount)
        paramsizes = self._unpackheader(fparamsizes)
        # make it a list of pairs again
        paramsizes = list(zip(paramsizes[::2], paramsizes[1::2]))
        # split mandatory from advisory
        mansizes = paramsizes[:mancount]
        advsizes = paramsizes[mancount:]
        # retrieve param value
        manparams = []
        for key, value in mansizes:
            manparams.append((self._fromheader(key), self._fromheader(value)))
        advparams = []
        for key, value in advsizes:
            advparams.append((self._fromheader(key), self._fromheader(value)))
        self._initparams(manparams, advparams)
        ## part payload
        self._payloadstream = util.chunkbuffer(self._payloadchunks())
        # we read the data, tell it
        self._initialized = True

    def _payloadchunks(self):
        """Generator of decoded chunks in the payload."""
        return decodepayloadchunks(self.ui, self._fp)

    def consume(self):
        """Read the part payload until completion.

        By consuming the part data, the underlying stream read offset will
        be advanced to the next part (or end of stream).
        """
        if self.consumed:
            return

        chunk = self.read(32768)
        while chunk:
            self._pos += len(chunk)
            chunk = self.read(32768)

    def read(self, size=None):
        """read payload data"""
        if not self._initialized:
            self._readheader()
        if size is None:
            data = self._payloadstream.read()
        else:
            data = self._payloadstream.read(size)
        self._pos += len(data)
        if size is None or len(data) < size:
            if not self.consumed and self._pos:
                self.ui.debug(
                    b'bundle2-input-part: total payload size %i\n' % self._pos
                )
            self.consumed = True
        return data
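
    # Illustrative sketch, not part of the module: reading a part's payload
    # incrementally, then making sure the stream is positioned at the next
    # part. 'part' is assumed to be an unbundlepart instance.
    #
    #   first = part.read(4096)   # up to 4 KiB of decoded payload
    #   part.consume()            # skip whatever remains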


class seekableunbundlepart(unbundlepart):
    """A bundle2 part in a bundle that is seekable.

    Regular ``unbundlepart`` instances can only be read once. This class
    extends ``unbundlepart`` to enable bi-directional seeking within the
    part.

    Bundle2 part data consists of framed chunks. Offsets when seeking
    refer to the decoded data, not the offsets in the underlying bundle2
    stream.

    To facilitate quickly seeking within the decoded data, instances of this
    class maintain a mapping between offsets in the underlying stream and
    the decoded payload. This mapping will consume memory in proportion
    to the number of chunks within the payload (which almost certainly
    increases in proportion with the size of the part).
    """

    def __init__(self, ui, header, fp):
        # (payload, file) offsets for chunk starts.
        self._chunkindex = []

        super(seekableunbundlepart, self).__init__(ui, header, fp)

    def _payloadchunks(self, chunknum=0):
        '''seek to specified chunk and start yielding data'''
        if len(self._chunkindex) == 0:
            assert chunknum == 0, b'Must start with chunk 0'
            self._chunkindex.append((0, self._tellfp()))
        else:
            assert chunknum < len(self._chunkindex), (
                b'Unknown chunk %d' % chunknum
            )
            self._seekfp(self._chunkindex[chunknum][1])

        pos = self._chunkindex[chunknum][0]

        for chunk in decodepayloadchunks(self.ui, self._fp):
            chunknum += 1
            pos += len(chunk)
            if chunknum == len(self._chunkindex):
                self._chunkindex.append((pos, self._tellfp()))

            yield chunk

    def _findchunk(self, pos):
        '''for a given payload position, return a chunk number and offset'''
        for chunk, (ppos, fpos) in enumerate(self._chunkindex):
            if ppos == pos:
                return chunk, 0
            elif ppos > pos:
                return chunk - 1, pos - self._chunkindex[chunk - 1][0]
        raise ValueError(b'Unknown chunk')

    def tell(self):
        return self._pos

    def seek(self, offset, whence=os.SEEK_SET):
        if whence == os.SEEK_SET:
            newpos = offset
        elif whence == os.SEEK_CUR:
            newpos = self._pos + offset
        elif whence == os.SEEK_END:
            if not self.consumed:
                # Can't use self.consume() here because it advances self._pos.
                chunk = self.read(32768)
                while chunk:
                    chunk = self.read(32768)
            newpos = self._chunkindex[-1][0] - offset
        else:
            raise ValueError(b'Unknown whence value: %r' % (whence,))

        if newpos > self._chunkindex[-1][0] and not self.consumed:
            # Can't use self.consume() here because it advances self._pos.
            chunk = self.read(32768)
            while chunk:
                chunk = self.read(32768)

        if not 0 <= newpos <= self._chunkindex[-1][0]:
            raise ValueError(b'Offset out of range')

        if self._pos != newpos:
            chunk, internaloffset = self._findchunk(newpos)
            self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
            adjust = self.read(internaloffset)
            if len(adjust) != internaloffset:
                raise error.Abort(_(b'Seek failed\n'))
            self._pos = newpos
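
    # Illustrative sketch, not part of the module: rewinding a seekable
    # part after a first pass over its payload. 'part' is assumed to be a
    # seekableunbundlepart instance.
    #
    #   data = part.read()        # consume the whole decoded payload
    #   part.seek(0)              # rewind to the payload start
    #   again = part.read(1024)   # re-read the first kilobyte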

    def _seekfp(self, offset, whence=0):
        """move the underlying file pointer

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        if self._seekable:
            return self._fp.seek(offset, whence)
        else:
            raise NotImplementedError(_(b'File pointer is not seekable'))

    def _tellfp(self):
        """return the file offset, or None if file is not seekable

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        if self._seekable:
            try:
                return self._fp.tell()
            except IOError as e:
                if e.errno == errno.ESPIPE:
                    self._seekable = False
                else:
                    raise
        return None


# These are only the static capabilities.
# Check the 'getrepocaps' function for the rest.
capabilities = {
    b'HG20': (),
    b'bookmarks': (),
    b'error': (b'abort', b'unsupportedcontent', b'pushraced', b'pushkey'),
    b'listkeys': (),
    b'pushkey': (),
    b'digests': tuple(sorted(util.DIGESTS.keys())),
    b'remote-changegroup': (b'http', b'https'),
    b'hgtagsfnodes': (),
    b'phases': (b'heads',),
    b'stream': (b'v2',),
}


def getrepocaps(repo, allowpushback=False, role=None):
    """return the bundle2 capabilities for a given repo

    Exists to allow extensions (like evolution) to mutate the capabilities.

    The returned value is used for servers advertising their capabilities as
    well as clients advertising their capabilities to servers as part of
    bundle2 requests. The ``role`` argument specifies which is which.
    """
    if role not in (b'client', b'server'):
        raise error.ProgrammingError(b'role argument must be client or server')

    caps = capabilities.copy()
    caps[b'changegroup'] = tuple(
        sorted(changegroup.supportedincomingversions(repo))
    )
    if obsolete.isenabled(repo, obsolete.exchangeopt):
        supportedformat = tuple(b'V%i' % v for v in obsolete.formats)
        caps[b'obsmarkers'] = supportedformat
    if allowpushback:
        caps[b'pushback'] = ()
    cpmode = repo.ui.config(b'server', b'concurrent-push-mode')
    if cpmode == b'check-related':
        caps[b'checkheads'] = (b'related',)
    if b'phases' in repo.ui.configlist(b'devel', b'legacy.exchange'):
        caps.pop(b'phases')

    # Don't advertise stream clone support in server mode if not configured.
    if role == b'server':
        streamsupported = repo.ui.configbool(
            b'server', b'uncompressed', untrusted=True
        )
        featuresupported = repo.ui.configbool(b'server', b'bundle2.stream')

        if not streamsupported or not featuresupported:
            caps.pop(b'stream')
    # Else always advertise support on client, because payload support
    # should always be advertised.

    # b'rev-branch-cache' is no longer advertised, but still supported
    # for legacy clients.

    return caps
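
# Illustrative sketch, not part of the module: computing the capabilities a
# server would advertise. 'repo' is assumed to be a local repository object.
#
#   caps = getrepocaps(repo, role=b'server')
#   assert b'HG20' in caps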


def bundle2caps(remote):
    """return the bundle capabilities of a peer as dict"""
    raw = remote.capable(b'bundle2')
    if not raw and raw != b'':
        return {}
    capsblob = urlreq.unquote(remote.capable(b'bundle2'))
    return decodecaps(capsblob)


def obsmarkersversion(caps):
    """extract the list of supported obsmarkers versions from a bundle2caps dict"""
    obscaps = caps.get(b'obsmarkers', ())
    return [int(c[1:]) for c in obscaps if c.startswith(b'V')]
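
# Illustrative example, not part of the module:
#
#   obsmarkersversion({b'obsmarkers': (b'V0', b'V1')})  # -> [0, 1]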


def writenewbundle(
    ui,
    repo,
    source,
    filename,
    bundletype,
    outgoing,
    opts,
    vfs=None,
    compression=None,
    compopts=None,
):
    if bundletype.startswith(b'HG10'):
        cg = changegroup.makechangegroup(repo, outgoing, b'01', source)
        return writebundle(
            ui,
            cg,
            filename,
            bundletype,
            vfs=vfs,
            compression=compression,
            compopts=compopts,
        )
    elif not bundletype.startswith(b'HG20'):
        raise error.ProgrammingError(b'unknown bundle type: %s' % bundletype)

    caps = {}
    if b'obsolescence' in opts:
        caps[b'obsmarkers'] = (b'V1',)
    bundle = bundle20(ui, caps)
    bundle.setcompression(compression, compopts)
    _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
    chunkiter = bundle.getchunks()

    return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)


def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
    # We should eventually reconcile this logic with the one behind
    # 'exchange.getbundle2partsgenerator'.
    #
    # The types of input from 'getbundle' and 'writenewbundle' are a bit
    # different right now. So we keep them separated for now for the sake of
    # simplicity.

    # we might not always want a changegroup in such a bundle, for example in
    # stream bundles
    if opts.get(b'changegroup', True):
        cgversion = opts.get(b'cg.version')
        if cgversion is None:
            cgversion = changegroup.safeversion(repo)
        cg = changegroup.makechangegroup(repo, outgoing, cgversion, source)
        part = bundler.newpart(b'changegroup', data=cg.getchunks())
        part.addparam(b'version', cg.version)
        if b'clcount' in cg.extras:
            part.addparam(
                b'nbchanges', b'%d' % cg.extras[b'clcount'], mandatory=False
            )
        if opts.get(b'phases') and repo.revs(
            b'%ln and secret()', outgoing.ancestorsof
        ):
            part.addparam(
                b'targetphase', b'%d' % phases.secret, mandatory=False
            )
        if repository.REPO_FEATURE_SIDE_DATA in repo.features:
            part.addparam(b'exp-sidedata', b'1')

    if opts.get(b'streamv2', False):
        addpartbundlestream2(bundler, repo, stream=True)

    if opts.get(b'tagsfnodescache', True):
        addparttagsfnodescache(repo, bundler, outgoing)

    if opts.get(b'revbranchcache', True):
        addpartrevbranchcache(repo, bundler, outgoing)

    if opts.get(b'obsolescence', False):
        obsmarkers = repo.obsstore.relevantmarkers(outgoing.missing)
        buildobsmarkerspart(
            bundler,
            obsmarkers,
            mandatory=opts.get(b'obsolescence-mandatory', True),
        )

    if opts.get(b'phases', False):
        headsbyphase = phases.subsetphaseheads(repo, outgoing.missing)
        phasedata = phases.binaryencode(headsbyphase)
        bundler.newpart(b'phase-heads', data=phasedata)


def addparttagsfnodescache(repo, bundler, outgoing):
    # we include the tags fnode cache for the bundle changesets
    # (as an optional part)
    cache = tags.hgtagsfnodescache(repo.unfiltered())
    chunks = []

    # .hgtags fnodes are only relevant for head changesets. While we could
    # transfer values for all known nodes, there will likely be little to
    # no benefit.
    #
    # We don't bother using a generator to produce output data because
    # a) we only have 40 bytes per head and even esoteric numbers of heads
    # consume little memory (1M heads is 40MB) b) we don't want to send the
    # part if we don't have entries and knowing if we have entries requires
    # cache lookups.
    for node in outgoing.ancestorsof:
        # Don't compute missing, as this may slow down serving.
        fnode = cache.getfnode(node, computemissing=False)
        if fnode:
            chunks.extend([node, fnode])

    if chunks:
        bundler.newpart(b'hgtagsfnodes', data=b''.join(chunks))
1781
1781
1782
1782
1783 def addpartrevbranchcache(repo, bundler, outgoing):
1783 def addpartrevbranchcache(repo, bundler, outgoing):
1784 # we include the rev branch cache for the bundle changesets
1784 # we include the rev branch cache for the bundle changesets
1785 # (as an optional part)
1785 # (as an optional part)
1786 cache = repo.revbranchcache()
1786 cache = repo.revbranchcache()
1787 cl = repo.unfiltered().changelog
1787 cl = repo.unfiltered().changelog
1788 branchesdata = collections.defaultdict(lambda: (set(), set()))
1788 branchesdata = collections.defaultdict(lambda: (set(), set()))
1789 for node in outgoing.missing:
1789 for node in outgoing.missing:
1790 branch, close = cache.branchinfo(cl.rev(node))
1790 branch, close = cache.branchinfo(cl.rev(node))
1791 branchesdata[branch][close].add(node)
1791 branchesdata[branch][close].add(node)
1792
1792
1793 def generate():
1793 def generate():
1794 for branch, (nodes, closed) in sorted(branchesdata.items()):
1794 for branch, (nodes, closed) in sorted(branchesdata.items()):
1795 utf8branch = encoding.fromlocal(branch)
1795 utf8branch = encoding.fromlocal(branch)
1796 yield rbcstruct.pack(len(utf8branch), len(nodes), len(closed))
1796 yield rbcstruct.pack(len(utf8branch), len(nodes), len(closed))
1797 yield utf8branch
1797 yield utf8branch
1798 for n in sorted(nodes):
1798 for n in sorted(nodes):
1799 yield n
1799 yield n
1800 for n in sorted(closed):
1800 for n in sorted(closed):
1801 yield n
1801 yield n
1802
1802
1803 bundler.newpart(b'cache:rev-branch-cache', data=generate(), mandatory=False)
1803 bundler.newpart(b'cache:rev-branch-cache', data=generate(), mandatory=False)
1804
1804
1805
1805
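# Decoding sketch for one record of the 'cache:rev-branch-cache' payload
# produced by generate() above (hypothetical helper, not Mercurial API).
# Each record is a big-endian '>III' header giving the branch-name
# length, the count of open-head nodes and the count of closed-head
# nodes, followed by the UTF-8 branch name and the raw 20-byte nodes.
import struct

def _read_rbc_record_sketch(data, offset=0):
    namelen, nopen, nclosed = struct.unpack_from(b'>III', data, offset)
    offset += struct.calcsize(b'>III')
    utf8branch = data[offset : offset + namelen]
    offset += namelen
    nodes = [
        data[offset + i * 20 : offset + (i + 1) * 20]
        for i in range(nopen + nclosed)
    ]
    offset += 20 * (nopen + nclosed)
    return utf8branch, nodes[:nopen], nodes[nopen:], offset
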
1806 def _formatrequirementsspec(requirements):
1806 def _formatrequirementsspec(requirements):
1807 requirements = [req for req in requirements if req != b"shared"]
1807 requirements = [req for req in requirements if req != b"shared"]
1808 return urlreq.quote(b','.join(sorted(requirements)))
1808 return urlreq.quote(b','.join(sorted(requirements)))
1809
1809
1810
1810
1811 def _formatrequirementsparams(requirements):
1811 def _formatrequirementsparams(requirements):
1812 requirements = _formatrequirementsspec(requirements)
1812 requirements = _formatrequirementsspec(requirements)
1813 params = b"%s%s" % (urlreq.quote(b"requirements="), requirements)
1813 params = b"%s%s" % (urlreq.quote(b"requirements="), requirements)
1814 return params
1814 return params
1815
1815
1816
1816
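# Worked example for the two helpers above (hypothetical function and
# values, for illustration only): b'shared' is filtered out, names are
# sorted and joined on b',', then percent-encoded, so both '=' and ','
# end up escaped in the final parameter blob.
def _requirements_params_example():
    encoded = _formatrequirementsparams([b'store', b'shared', b'revlogv1'])
    # expected: b'requirements%3Drevlogv1%2Cstore'
    return encoded
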
1817 def format_remote_wanted_sidedata(repo):
1817 def format_remote_wanted_sidedata(repo):
1818 """Formats a repo's wanted sidedata categories into a bytestring for
1818 """Formats a repo's wanted sidedata categories into a bytestring for
1819 capabilities exchange."""
1819 capabilities exchange."""
1820 wanted = b""
1820 wanted = b""
1821 if repo._wanted_sidedata:
1821 if repo._wanted_sidedata:
1822 wanted = b','.join(
1822 wanted = b','.join(
1823 pycompat.bytestr(c) for c in sorted(repo._wanted_sidedata)
1823 pycompat.bytestr(c) for c in sorted(repo._wanted_sidedata)
1824 )
1824 )
1825 return wanted
1825 return wanted
1826
1826
1827
1827
1828 def read_remote_wanted_sidedata(remote):
1828 def read_remote_wanted_sidedata(remote):
1829 sidedata_categories = remote.capable(b'exp-wanted-sidedata')
1829 sidedata_categories = remote.capable(b'exp-wanted-sidedata')
1830 return read_wanted_sidedata(sidedata_categories)
1830 return read_wanted_sidedata(sidedata_categories)
1831
1831
1832
1832
1833 def read_wanted_sidedata(formatted):
1833 def read_wanted_sidedata(formatted):
1834 if formatted:
1834 if formatted:
1835 return set(formatted.split(b','))
1835 return set(formatted.split(b','))
1836 return set()
1836 return set()
1837
1837
1838
1838
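# Round-trip sketch for the sidedata helpers above (category names are
# made up for illustration): the wire format is a plain comma-separated
# byte string, and the empty string decodes back to the empty set.
def _wanted_sidedata_roundtrip_example():
    assert read_wanted_sidedata(b'cat-a,cat-b') == {b'cat-a', b'cat-b'}
    assert read_wanted_sidedata(b'') == set()
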
1839 def addpartbundlestream2(bundler, repo, **kwargs):
1839 def addpartbundlestream2(bundler, repo, **kwargs):
1840 if not kwargs.get('stream', False):
1840 if not kwargs.get('stream', False):
1841 return
1841 return
1842
1842
1843 if not streamclone.allowservergeneration(repo):
1843 if not streamclone.allowservergeneration(repo):
1844 raise error.Abort(
1844 raise error.Abort(
1845 _(
1845 _(
1846 b'stream data requested but server does not allow '
1846 b'stream data requested but server does not allow '
1847 b'this feature'
1847 b'this feature'
1848 ),
1848 ),
1849 hint=_(
1849 hint=_(
1850 b'well-behaved clients should not be '
1850 b'well-behaved clients should not be '
1851 b'requesting stream data from servers not '
1851 b'requesting stream data from servers not '
1852 b'advertising it; the client may be buggy'
1852 b'advertising it; the client may be buggy'
1853 ),
1853 ),
1854 )
1854 )
1855
1855
1856 # Stream clones don't compress well. And compression undermines a
1856 # Stream clones don't compress well. And compression undermines a
1857 # goal of stream clones, which is to be fast. Communicate the desire
1857 # goal of stream clones, which is to be fast. Communicate the desire
1858 # to avoid compression to consumers of the bundle.
1858 # to avoid compression to consumers of the bundle.
1859 bundler.prefercompressed = False
1859 bundler.prefercompressed = False
1860
1860
1861 # get the includes and excludes
1861 # get the includes and excludes
1862 includepats = kwargs.get('includepats')
1862 includepats = kwargs.get('includepats')
1863 excludepats = kwargs.get('excludepats')
1863 excludepats = kwargs.get('excludepats')
1864
1864
1865 narrowstream = repo.ui.configbool(
1865 narrowstream = repo.ui.configbool(
1866 b'experimental', b'server.stream-narrow-clones'
1866 b'experimental', b'server.stream-narrow-clones'
1867 )
1867 )
1868
1868
1869 if (includepats or excludepats) and not narrowstream:
1869 if (includepats or excludepats) and not narrowstream:
1870 raise error.Abort(_(b'server does not support narrow stream clones'))
1870 raise error.Abort(_(b'server does not support narrow stream clones'))
1871
1871
1872 includeobsmarkers = False
1872 includeobsmarkers = False
1873 if repo.obsstore:
1873 if repo.obsstore:
1874 remoteversions = obsmarkersversion(bundler.capabilities)
1874 remoteversions = obsmarkersversion(bundler.capabilities)
1875 if not remoteversions:
1875 if not remoteversions:
1876 raise error.Abort(
1876 raise error.Abort(
1877 _(
1877 _(
1878 b'server has obsolescence markers, but client '
1878 b'server has obsolescence markers, but client '
1879 b'cannot receive them via stream clone'
1879 b'cannot receive them via stream clone'
1880 )
1880 )
1881 )
1881 )
1882 elif repo.obsstore._version in remoteversions:
1882 elif repo.obsstore._version in remoteversions:
1883 includeobsmarkers = True
1883 includeobsmarkers = True
1884
1884
1885 filecount, bytecount, it = streamclone.generatev2(
1885 filecount, bytecount, it = streamclone.generatev2(
1886 repo, includepats, excludepats, includeobsmarkers
1886 repo, includepats, excludepats, includeobsmarkers
1887 )
1887 )
1888 requirements = streamclone.streamed_requirements(repo)
1888 requirements = streamclone.streamed_requirements(repo)
1889 requirements = _formatrequirementsspec(requirements)
1889 requirements = _formatrequirementsspec(requirements)
1890 part = bundler.newpart(b'stream2', data=it)
1890 part = bundler.newpart(b'stream2', data=it)
1891 part.addparam(b'bytecount', b'%d' % bytecount, mandatory=True)
1891 part.addparam(b'bytecount', b'%d' % bytecount, mandatory=True)
1892 part.addparam(b'filecount', b'%d' % filecount, mandatory=True)
1892 part.addparam(b'filecount', b'%d' % filecount, mandatory=True)
1893 part.addparam(b'requirements', requirements, mandatory=True)
1893 part.addparam(b'requirements', requirements, mandatory=True)
1894
1894
1895
1895
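# Usage sketch (hypothetical caller): note the native-str keyword keys,
# since the options arrive through **kwargs. Without stream=True the
# function is a no-op; with it, a 'stream2' part carrying the mandatory
# bytecount/filecount/requirements parameters is appended.
#
#   addpartbundlestream2(bundler, repo)               # no-op
#   addpartbundlestream2(bundler, repo, stream=True)  # adds 'stream2' part
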
1896 def buildobsmarkerspart(bundler, markers, mandatory=True):
1896 def buildobsmarkerspart(bundler, markers, mandatory=True):
1897 """add an obsmarker part to the bundler with <markers>
1897 """add an obsmarker part to the bundler with <markers>
1898
1898
1899 No part is created if markers is empty.
1899 No part is created if markers is empty.
1900 Raises ValueError if the bundler doesn't support any known obsmarker format.
1900 Raises ValueError if the bundler doesn't support any known obsmarker format.
1901 """
1901 """
1902 if not markers:
1902 if not markers:
1903 return None
1903 return None
1904
1904
1905 remoteversions = obsmarkersversion(bundler.capabilities)
1905 remoteversions = obsmarkersversion(bundler.capabilities)
1906 version = obsolete.commonversion(remoteversions)
1906 version = obsolete.commonversion(remoteversions)
1907 if version is None:
1907 if version is None:
1908 raise ValueError(b'bundler does not support common obsmarker format')
1908 raise ValueError(b'bundler does not support common obsmarker format')
1909 stream = obsolete.encodemarkers(markers, True, version=version)
1909 stream = obsolete.encodemarkers(markers, True, version=version)
1910 return bundler.newpart(b'obsmarkers', data=stream, mandatory=mandatory)
1910 return bundler.newpart(b'obsmarkers', data=stream, mandatory=mandatory)
1911
1911
1912
1912
1913 def writebundle(
1913 def writebundle(
1914 ui, cg, filename, bundletype, vfs=None, compression=None, compopts=None
1914 ui, cg, filename, bundletype, vfs=None, compression=None, compopts=None
1915 ):
1915 ):
1916 """Write a bundle file and return its filename.
1916 """Write a bundle file and return its filename.
1917
1917
1918 Existing files will not be overwritten.
1918 Existing files will not be overwritten.
1919 If no filename is specified, a temporary file is created.
1919 If no filename is specified, a temporary file is created.
1920 bz2 compression can be turned off.
1920 bz2 compression can be turned off.
1921 The bundle file will be deleted in case of errors.
1921 The bundle file will be deleted in case of errors.
1922 """
1922 """
1923
1923
1924 if bundletype == b"HG20":
1924 if bundletype == b"HG20":
1925 bundle = bundle20(ui)
1925 bundle = bundle20(ui)
1926 bundle.setcompression(compression, compopts)
1926 bundle.setcompression(compression, compopts)
1927 part = bundle.newpart(b'changegroup', data=cg.getchunks())
1927 part = bundle.newpart(b'changegroup', data=cg.getchunks())
1928 part.addparam(b'version', cg.version)
1928 part.addparam(b'version', cg.version)
1929 if b'clcount' in cg.extras:
1929 if b'clcount' in cg.extras:
1930 part.addparam(
1930 part.addparam(
1931 b'nbchanges', b'%d' % cg.extras[b'clcount'], mandatory=False
1931 b'nbchanges', b'%d' % cg.extras[b'clcount'], mandatory=False
1932 )
1932 )
1933 chunkiter = bundle.getchunks()
1933 chunkiter = bundle.getchunks()
1934 else:
1934 else:
1935 # compression argument is only for the bundle2 case
1935 # compression argument is only for the bundle2 case
1936 assert compression is None
1936 assert compression is None
1937 if cg.version != b'01':
1937 if cg.version != b'01':
1938 raise error.Abort(
1938 raise error.Abort(
1939 _(b'old bundle types only support v1 changegroups')
1939 _(b'old bundle types only support v1 changegroups')
1940 )
1940 )
1941 header, comp = bundletypes[bundletype]
1941 header, comp = bundletypes[bundletype]
1942 if comp not in util.compengines.supportedbundletypes:
1942 if comp not in util.compengines.supportedbundletypes:
1943 raise error.Abort(_(b'unknown stream compression type: %s') % comp)
1943 raise error.Abort(_(b'unknown stream compression type: %s') % comp)
1944 compengine = util.compengines.forbundletype(comp)
1944 compengine = util.compengines.forbundletype(comp)
1945
1945
1946 def chunkiter():
1946 def chunkiter():
1947 yield header
1947 yield header
1948 for chunk in compengine.compressstream(cg.getchunks(), compopts):
1948 for chunk in compengine.compressstream(cg.getchunks(), compopts):
1949 yield chunk
1949 yield chunk
1950
1950
1951 chunkiter = chunkiter()
1951 chunkiter = chunkiter()
1952
1952
1953 # parse the changegroup data, otherwise we will block
1953 # parse the changegroup data, otherwise we will block
1954 # in case of sshrepo because we don't know the end of the stream
1954 # in case of sshrepo because we don't know the end of the stream
1955 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1955 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1956
1956
1957
1957
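# Usage sketch (hypothetical values): writing a changegroup 'cg' to disk
# as a bundle2 file. b'HG20' takes the bundle2 path above; legacy types
# such as b'HG10BZ' use the header-plus-compressed-chunks path and only
# accept version '01' changegroups.
#
#   writebundle(ui, cg, b'out.hg', b'HG20')
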
1958 def combinechangegroupresults(op):
1958 def combinechangegroupresults(op):
1959 """logic to combine 0 or more addchangegroup results into one"""
1959 """logic to combine 0 or more addchangegroup results into one"""
1960 results = [r.get(b'return', 0) for r in op.records[b'changegroup']]
1960 results = [r.get(b'return', 0) for r in op.records[b'changegroup']]
1961 changedheads = 0
1961 changedheads = 0
1962 result = 1
1962 result = 1
1963 for ret in results:
1963 for ret in results:
1964 # If any changegroup result is 0, return 0
1964 # If any changegroup result is 0, return 0
1965 if ret == 0:
1965 if ret == 0:
1966 result = 0
1966 result = 0
1967 break
1967 break
1968 if ret < -1:
1968 if ret < -1:
1969 changedheads += ret + 1
1969 changedheads += ret + 1
1970 elif ret > 1:
1970 elif ret > 1:
1971 changedheads += ret - 1
1971 changedheads += ret - 1
1972 if changedheads > 0:
1972 if changedheads > 0:
1973 result = 1 + changedheads
1973 result = 1 + changedheads
1974 elif changedheads < 0:
1974 elif changedheads < 0:
1975 result = -1 + changedheads
1975 result = -1 + changedheads
1976 return result
1976 return result
1977
1977
1978
1978
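# Worked example of the return-code convention combined above: ret == 1
# means success with unchanged heads, ret == 1 + n means n heads added,
# ret == -1 - n means n heads removed, and ret == 0 means failure. Two
# parts that added 2 heads (ret=3) and 1 head (ret=2) combine into
# changedheads = 2 + 1 = 3, so the overall result is 1 + 3 = 4.
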
1979 @parthandler(
1979 @parthandler(
1980 b'changegroup',
1980 b'changegroup',
1981 (
1981 (
1982 b'version',
1982 b'version',
1983 b'nbchanges',
1983 b'nbchanges',
1984 b'exp-sidedata',
1984 b'exp-sidedata',
1985 b'exp-wanted-sidedata',
1985 b'exp-wanted-sidedata',
1986 b'treemanifest',
1986 b'treemanifest',
1987 b'targetphase',
1987 b'targetphase',
1988 ),
1988 ),
1989 )
1989 )
1990 def handlechangegroup(op, inpart):
1990 def handlechangegroup(op, inpart):
1991 """apply a changegroup part on the repo"""
1991 """apply a changegroup part on the repo"""
1992 from . import localrepo
1992 from . import localrepo
1993
1993
1994 tr = op.gettransaction()
1994 tr = op.gettransaction()
1995 unpackerversion = inpart.params.get(b'version', b'01')
1995 unpackerversion = inpart.params.get(b'version', b'01')
1996 # We should raise an appropriate exception here
1996 # We should raise an appropriate exception here
1997 cg = changegroup.getunbundler(unpackerversion, inpart, None)
1997 cg = changegroup.getunbundler(unpackerversion, inpart, None)
1998 # the source and url passed here are overwritten by the ones contained in
1998 # the source and url passed here are overwritten by the ones contained in
1999 # the transaction.hookargs argument. So 'bundle2' is a placeholder
1999 # the transaction.hookargs argument. So 'bundle2' is a placeholder
2000 nbchangesets = None
2000 nbchangesets = None
2001 if b'nbchanges' in inpart.params:
2001 if b'nbchanges' in inpart.params:
2002 nbchangesets = int(inpart.params.get(b'nbchanges'))
2002 nbchangesets = int(inpart.params.get(b'nbchanges'))
2003 if b'treemanifest' in inpart.params and not scmutil.istreemanifest(op.repo):
2003 if b'treemanifest' in inpart.params and not scmutil.istreemanifest(op.repo):
2004 if len(op.repo.changelog) != 0:
2004 if len(op.repo.changelog) != 0:
2005 raise error.Abort(
2005 raise error.Abort(
2006 _(
2006 _(
2007 b"bundle contains tree manifests, but local repo is "
2007 b"bundle contains tree manifests, but local repo is "
2008 b"non-empty and does not use tree manifests"
2008 b"non-empty and does not use tree manifests"
2009 )
2009 )
2010 )
2010 )
2011 op.repo.requirements.add(requirements.TREEMANIFEST_REQUIREMENT)
2011 op.repo.requirements.add(requirements.TREEMANIFEST_REQUIREMENT)
2012 op.repo.svfs.options = localrepo.resolvestorevfsoptions(
2012 op.repo.svfs.options = localrepo.resolvestorevfsoptions(
2013 op.repo.ui, op.repo.requirements, op.repo.features
2013 op.repo.ui, op.repo.requirements, op.repo.features
2014 )
2014 )
2015 scmutil.writereporequirements(op.repo)
2015 scmutil.writereporequirements(op.repo)
2016
2016
2017 extrakwargs = {}
2017 extrakwargs = {}
2018 targetphase = inpart.params.get(b'targetphase')
2018 targetphase = inpart.params.get(b'targetphase')
2019 if targetphase is not None:
2019 if targetphase is not None:
2020 extrakwargs['targetphase'] = int(targetphase)
2020 extrakwargs['targetphase'] = int(targetphase)
2021
2021
2022 remote_sidedata = inpart.params.get(b'exp-wanted-sidedata')
2022 remote_sidedata = inpart.params.get(b'exp-wanted-sidedata')
2023 extrakwargs['sidedata_categories'] = read_wanted_sidedata(remote_sidedata)
2023 extrakwargs['sidedata_categories'] = read_wanted_sidedata(remote_sidedata)
2024
2024
2025 ret = _processchangegroup(
2025 ret = _processchangegroup(
2026 op,
2026 op,
2027 cg,
2027 cg,
2028 tr,
2028 tr,
2029 op.source,
2029 op.source,
2030 b'bundle2',
2030 b'bundle2',
2031 expectedtotal=nbchangesets,
2031 expectedtotal=nbchangesets,
2032 **extrakwargs
2032 **extrakwargs
2033 )
2033 )
2034 if op.reply is not None:
2034 if op.reply is not None:
2035 # This is definitely not the final form of this
2035 # This is definitely not the final form of this
2036 # return. But one needs to start somewhere.
2036 # return. But one needs to start somewhere.
2037 part = op.reply.newpart(b'reply:changegroup', mandatory=False)
2037 part = op.reply.newpart(b'reply:changegroup', mandatory=False)
2038 part.addparam(
2038 part.addparam(
2039 b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
2039 b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
2040 )
2040 )
2041 part.addparam(b'return', b'%i' % ret, mandatory=False)
2041 part.addparam(b'return', b'%i' % ret, mandatory=False)
2042 assert not inpart.read()
2042 assert not inpart.read()
2043
2043
2044
2044
2045 _remotechangegroupparams = tuple(
2045 _remotechangegroupparams = tuple(
2046 [b'url', b'size', b'digests']
2046 [b'url', b'size', b'digests']
2047 + [b'digest:%s' % k for k in util.DIGESTS.keys()]
2047 + [b'digest:%s' % k for k in util.DIGESTS.keys()]
2048 )
2048 )
2049
2049
2050
2050
2051 @parthandler(b'remote-changegroup', _remotechangegroupparams)
2051 @parthandler(b'remote-changegroup', _remotechangegroupparams)
2052 def handleremotechangegroup(op, inpart):
2052 def handleremotechangegroup(op, inpart):
2053 """apply a bundle10 on the repo, given an url and validation information
2053 """apply a bundle10 on the repo, given an url and validation information
2054
2054
2055 All the information about the remote bundle to import are given as
2055 All the information about the remote bundle to import are given as
2056 parameters. The parameters include:
2056 parameters. The parameters include:
2057 - url: the url to the bundle10.
2057 - url: the url to the bundle10.
2058 - size: the bundle10 file size. It is used to validate that what was
2058 - size: the bundle10 file size. It is used to validate that what was
2059 retrieved by the client matches the server's knowledge about the bundle.
2059 retrieved by the client matches the server's knowledge about the bundle.
2060 - digests: a space separated list of the digest types provided as
2060 - digests: a space separated list of the digest types provided as
2061 parameters.
2061 parameters.
2062 - digest:<digest-type>: the hexadecimal representation of the digest with
2062 - digest:<digest-type>: the hexadecimal representation of the digest with
2063 that name. Like the size, it is used to validate that what was retrieved by
2063 that name. Like the size, it is used to validate that what was retrieved by
2064 the client matches what the server knows about the bundle.
2064 the client matches what the server knows about the bundle.
2065
2065
2066 When multiple digest types are given, all of them are checked.
2066 When multiple digest types are given, all of them are checked.
2067 """
2067 """
2068 try:
2068 try:
2069 raw_url = inpart.params[b'url']
2069 raw_url = inpart.params[b'url']
2070 except KeyError:
2070 except KeyError:
2071 raise error.Abort(_(b'remote-changegroup: missing "%s" param') % b'url')
2071 raise error.Abort(_(b'remote-changegroup: missing "%s" param') % b'url')
2072 parsed_url = urlutil.url(raw_url)
2072 parsed_url = urlutil.url(raw_url)
2073 if parsed_url.scheme not in capabilities[b'remote-changegroup']:
2073 if parsed_url.scheme not in capabilities[b'remote-changegroup']:
2074 raise error.Abort(
2074 raise error.Abort(
2075 _(b'remote-changegroup does not support %s urls')
2075 _(b'remote-changegroup does not support %s urls')
2076 % parsed_url.scheme
2076 % parsed_url.scheme
2077 )
2077 )
2078
2078
2079 try:
2079 try:
2080 size = int(inpart.params[b'size'])
2080 size = int(inpart.params[b'size'])
2081 except ValueError:
2081 except ValueError:
2082 raise error.Abort(
2082 raise error.Abort(
2083 _(b'remote-changegroup: invalid value for param "%s"') % b'size'
2083 _(b'remote-changegroup: invalid value for param "%s"') % b'size'
2084 )
2084 )
2085 except KeyError:
2085 except KeyError:
2086 raise error.Abort(
2086 raise error.Abort(
2087 _(b'remote-changegroup: missing "%s" param') % b'size'
2087 _(b'remote-changegroup: missing "%s" param') % b'size'
2088 )
2088 )
2089
2089
2090 digests = {}
2090 digests = {}
2091 for typ in inpart.params.get(b'digests', b'').split():
2091 for typ in inpart.params.get(b'digests', b'').split():
2092 param = b'digest:%s' % typ
2092 param = b'digest:%s' % typ
2093 try:
2093 try:
2094 value = inpart.params[param]
2094 value = inpart.params[param]
2095 except KeyError:
2095 except KeyError:
2096 raise error.Abort(
2096 raise error.Abort(
2097 _(b'remote-changegroup: missing "%s" param') % param
2097 _(b'remote-changegroup: missing "%s" param') % param
2098 )
2098 )
2099 digests[typ] = value
2099 digests[typ] = value
2100
2100
2101 real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)
2101 real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)
2102
2102
2103 tr = op.gettransaction()
2103 tr = op.gettransaction()
2104 from . import exchange
2104 from . import exchange
2105
2105
2106 cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
2106 cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
2107 if not isinstance(cg, changegroup.cg1unpacker):
2107 if not isinstance(cg, changegroup.cg1unpacker):
2108 raise error.Abort(
2108 raise error.Abort(
2109 _(b'%s: not a bundle version 1.0') % urlutil.hidepassword(raw_url)
2109 _(b'%s: not a bundle version 1.0') % urlutil.hidepassword(raw_url)
2110 )
2110 )
2111 ret = _processchangegroup(op, cg, tr, op.source, b'bundle2')
2111 ret = _processchangegroup(op, cg, tr, op.source, b'bundle2')
2112 if op.reply is not None:
2112 if op.reply is not None:
2113 # This is definitely not the final form of this
2113 # This is definitely not the final form of this
2114 # return. But one needs to start somewhere.
2114 # return. But one needs to start somewhere.
2115 part = op.reply.newpart(b'reply:changegroup')
2115 part = op.reply.newpart(b'reply:changegroup')
2116 part.addparam(
2116 part.addparam(
2117 b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
2117 b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
2118 )
2118 )
2119 part.addparam(b'return', b'%i' % ret, mandatory=False)
2119 part.addparam(b'return', b'%i' % ret, mandatory=False)
2120 try:
2120 try:
2121 real_part.validate()
2121 real_part.validate()
2122 except error.Abort as e:
2122 except error.Abort as e:
2123 raise error.Abort(
2123 raise error.Abort(
2124 _(b'bundle at %s is corrupted:\n%s')
2124 _(b'bundle at %s is corrupted:\n%s')
2125 % (urlutil.hidepassword(raw_url), e.message)
2125 % (urlutil.hidepassword(raw_url), e.message)
2126 )
2126 )
2127 assert not inpart.read()
2127 assert not inpart.read()
2128
2128
2129
2129
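# Illustrative parameter set for a 'remote-changegroup' part (all values
# are made up): every type listed in 'digests' must come with a matching
# 'digest:<type>' parameter, and each one is checked after the download.
#
#   {
#       b'url': b'https://example.com/pull.hg',
#       b'size': b'12345',
#       b'digests': b'sha1',
#       b'digest:sha1': b'da39a3ee5e6b4b0d3255bfef95601890afd80709',
#   }
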
2130 @parthandler(b'reply:changegroup', (b'return', b'in-reply-to'))
2130 @parthandler(b'reply:changegroup', (b'return', b'in-reply-to'))
2131 def handlereplychangegroup(op, inpart):
2131 def handlereplychangegroup(op, inpart):
2132 ret = int(inpart.params[b'return'])
2132 ret = int(inpart.params[b'return'])
2133 replyto = int(inpart.params[b'in-reply-to'])
2133 replyto = int(inpart.params[b'in-reply-to'])
2134 op.records.add(b'changegroup', {b'return': ret}, replyto)
2134 op.records.add(b'changegroup', {b'return': ret}, replyto)
2135
2135
2136
2136
2137 @parthandler(b'check:bookmarks')
2137 @parthandler(b'check:bookmarks')
2138 def handlecheckbookmarks(op, inpart):
2138 def handlecheckbookmarks(op, inpart):
2139 """check location of bookmarks
2139 """check location of bookmarks
2140
2140
2141 This part is used to detect push races regarding bookmarks; it
2141 This part is used to detect push races regarding bookmarks; it
2142 contains binary encoded (bookmark, node) tuples. If the local state does
2142 contains binary encoded (bookmark, node) tuples. If the local state does
2143 not match the one in the part, a PushRaced exception is raised.
2143 not match the one in the part, a PushRaced exception is raised.
2144 """
2144 """
2145 bookdata = bookmarks.binarydecode(op.repo, inpart)
2145 bookdata = bookmarks.binarydecode(op.repo, inpart)
2146
2146
2147 msgstandard = (
2147 msgstandard = (
2148 b'remote repository changed while pushing - please try again '
2148 b'remote repository changed while pushing - please try again '
2149 b'(bookmark "%s" move from %s to %s)'
2149 b'(bookmark "%s" move from %s to %s)'
2150 )
2150 )
2151 msgmissing = (
2151 msgmissing = (
2152 b'remote repository changed while pushing - please try again '
2152 b'remote repository changed while pushing - please try again '
2153 b'(bookmark "%s" is missing, expected %s)'
2153 b'(bookmark "%s" is missing, expected %s)'
2154 )
2154 )
2155 msgexist = (
2155 msgexist = (
2156 b'remote repository changed while pushing - please try again '
2156 b'remote repository changed while pushing - please try again '
2157 b'(bookmark "%s" set on %s, expected missing)'
2157 b'(bookmark "%s" set on %s, expected missing)'
2158 )
2158 )
2159 for book, node in bookdata:
2159 for book, node in bookdata:
2160 currentnode = op.repo._bookmarks.get(book)
2160 currentnode = op.repo._bookmarks.get(book)
2161 if currentnode != node:
2161 if currentnode != node:
2162 if node is None:
2162 if node is None:
2163 finalmsg = msgexist % (book, short(currentnode))
2163 finalmsg = msgexist % (book, short(currentnode))
2164 elif currentnode is None:
2164 elif currentnode is None:
2165 finalmsg = msgmissing % (book, short(node))
2165 finalmsg = msgmissing % (book, short(node))
2166 else:
2166 else:
2167 finalmsg = msgstandard % (
2167 finalmsg = msgstandard % (
2168 book,
2168 book,
2169 short(node),
2169 short(node),
2170 short(currentnode),
2170 short(currentnode),
2171 )
2171 )
2172 raise error.PushRaced(finalmsg)
2172 raise error.PushRaced(finalmsg)
2173
2173
2174
2174
2175 @parthandler(b'check:heads')
2175 @parthandler(b'check:heads')
2176 def handlecheckheads(op, inpart):
2176 def handlecheckheads(op, inpart):
2177 """check that head of the repo did not change
2177 """check that head of the repo did not change
2178
2178
2179 This is used to detect a push race when using unbundle.
2179 This is used to detect a push race when using unbundle.
2180 This replaces the "heads" argument of unbundle."""
2180 This replaces the "heads" argument of unbundle."""
2181 h = inpart.read(20)
2181 h = inpart.read(20)
2182 heads = []
2182 heads = []
2183 while len(h) == 20:
2183 while len(h) == 20:
2184 heads.append(h)
2184 heads.append(h)
2185 h = inpart.read(20)
2185 h = inpart.read(20)
2186 assert not h
2186 assert not h
2187 # Trigger a transaction so that we are guaranteed to have the lock now.
2187 # Trigger a transaction so that we are guaranteed to have the lock now.
2188 if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
2188 if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
2189 op.gettransaction()
2189 op.gettransaction()
2190 if sorted(heads) != sorted(op.repo.heads()):
2190 if sorted(heads) != sorted(op.repo.heads()):
2191 raise error.PushRaced(
2191 raise error.PushRaced(
2192 b'remote repository changed while pushing - please try again'
2192 b'remote repository changed while pushing - please try again'
2193 )
2193 )
2194
2194
2195
2195
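# Producer-side sketch for the 'check:heads' payload read above
# (hypothetical helper): the body is just the expected 20-byte head
# nodes concatenated. Order is irrelevant since the handler compares
# sorted lists.
def _encode_checkheads_sketch(heads):
    assert all(len(h) == 20 for h in heads)
    return b''.join(heads)
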
2196 @parthandler(b'check:updated-heads')
2196 @parthandler(b'check:updated-heads')
2197 def handlecheckupdatedheads(op, inpart):
2197 def handlecheckupdatedheads(op, inpart):
2198 """check for race on the heads touched by a push
2198 """check for race on the heads touched by a push
2199
2199
2200 This is similar to 'check:heads' but focuses on the heads actually updated
2200 This is similar to 'check:heads' but focuses on the heads actually updated
2201 during the push. If other activities happen on unrelated heads, they are
2201 during the push. If other activities happen on unrelated heads, they are
2202 ignored.
2202 ignored.
2203
2203
2204 This allows servers with high traffic to avoid push contention as long as
2204 This allows servers with high traffic to avoid push contention as long as
2205 only unrelated parts of the graph are involved."""
2205 only unrelated parts of the graph are involved."""
2206 h = inpart.read(20)
2206 h = inpart.read(20)
2207 heads = []
2207 heads = []
2208 while len(h) == 20:
2208 while len(h) == 20:
2209 heads.append(h)
2209 heads.append(h)
2210 h = inpart.read(20)
2210 h = inpart.read(20)
2211 assert not h
2211 assert not h
2212 # trigger a transaction so that we are guaranteed to have the lock now.
2212 # trigger a transaction so that we are guaranteed to have the lock now.
2213 if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
2213 if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
2214 op.gettransaction()
2214 op.gettransaction()
2215
2215
2216 currentheads = set()
2216 currentheads = set()
2217 for ls in op.repo.branchmap().iterheads():
2217 for ls in op.repo.branchmap().iterheads():
2218 currentheads.update(ls)
2218 currentheads.update(ls)
2219
2219
2220 for h in heads:
2220 for h in heads:
2221 if h not in currentheads:
2221 if h not in currentheads:
2222 raise error.PushRaced(
2222 raise error.PushRaced(
2223 b'remote repository changed while pushing - '
2223 b'remote repository changed while pushing - '
2224 b'please try again'
2224 b'please try again'
2225 )
2225 )
2226
2226
2227
2227
2228 @parthandler(b'check:phases')
2228 @parthandler(b'check:phases')
2229 def handlecheckphases(op, inpart):
2229 def handlecheckphases(op, inpart):
2230 """check that phase boundaries of the repository did not change
2230 """check that phase boundaries of the repository did not change
2231
2231
2232 This is used to detect a push race.
2232 This is used to detect a push race.
2233 """
2233 """
2234 phasetonodes = phases.binarydecode(inpart)
2234 phasetonodes = phases.binarydecode(inpart)
2235 unfi = op.repo.unfiltered()
2235 unfi = op.repo.unfiltered()
2236 cl = unfi.changelog
2236 cl = unfi.changelog
2237 phasecache = unfi._phasecache
2237 phasecache = unfi._phasecache
2238 msg = (
2238 msg = (
2239 b'remote repository changed while pushing - please try again '
2239 b'remote repository changed while pushing - please try again '
2240 b'(%s is %s expected %s)'
2240 b'(%s is %s expected %s)'
2241 )
2241 )
2242 for expectedphase, nodes in phasetonodes.items():
2242 for expectedphase, nodes in phasetonodes.items():
2243 for n in nodes:
2243 for n in nodes:
2244 actualphase = phasecache.phase(unfi, cl.rev(n))
2244 actualphase = phasecache.phase(unfi, cl.rev(n))
2245 if actualphase != expectedphase:
2245 if actualphase != expectedphase:
2246 finalmsg = msg % (
2246 finalmsg = msg % (
2247 short(n),
2247 short(n),
2248 phases.phasenames[actualphase],
2248 phases.phasenames[actualphase],
2249 phases.phasenames[expectedphase],
2249 phases.phasenames[expectedphase],
2250 )
2250 )
2251 raise error.PushRaced(finalmsg)
2251 raise error.PushRaced(finalmsg)
2252
2252
2253
2253
2254 @parthandler(b'output')
2254 @parthandler(b'output')
2255 def handleoutput(op, inpart):
2255 def handleoutput(op, inpart):
2256 """forward output captured on the server to the client"""
2256 """forward output captured on the server to the client"""
2257 for line in inpart.read().splitlines():
2257 for line in inpart.read().splitlines():
2258 op.ui.status(_(b'remote: %s\n') % line)
2258 op.ui.status(_(b'remote: %s\n') % line)
2259
2259
2260
2260
2261 @parthandler(b'replycaps')
2261 @parthandler(b'replycaps')
2262 def handlereplycaps(op, inpart):
2262 def handlereplycaps(op, inpart):
2263 """Notify that a reply bundle should be created
2263 """Notify that a reply bundle should be created
2264
2264
2265 The payload contains the capabilities information for the reply"""
2265 The payload contains the capabilities information for the reply"""
2266 caps = decodecaps(inpart.read())
2266 caps = decodecaps(inpart.read())
2267 if op.reply is None:
2267 if op.reply is None:
2268 op.reply = bundle20(op.ui, caps)
2268 op.reply = bundle20(op.ui, caps)
2269
2269
2270
2270
2271 class AbortFromPart(error.Abort):
2271 class AbortFromPart(error.Abort):
2272 """Sub-class of Abort that denotes an error from a bundle2 part."""
2272 """Sub-class of Abort that denotes an error from a bundle2 part."""
2273
2273
2274
2274
2275 @parthandler(b'error:abort', (b'message', b'hint'))
2275 @parthandler(b'error:abort', (b'message', b'hint'))
2276 def handleerrorabort(op, inpart):
2276 def handleerrorabort(op, inpart):
2277 """Used to transmit abort error over the wire"""
2277 """Used to transmit abort error over the wire"""
2278 raise AbortFromPart(
2278 raise AbortFromPart(
2279 inpart.params[b'message'], hint=inpart.params.get(b'hint')
2279 inpart.params[b'message'], hint=inpart.params.get(b'hint')
2280 )
2280 )
2281
2281
2282
2282
2283 @parthandler(
2283 @parthandler(
2284 b'error:pushkey',
2284 b'error:pushkey',
2285 (b'namespace', b'key', b'new', b'old', b'ret', b'in-reply-to'),
2285 (b'namespace', b'key', b'new', b'old', b'ret', b'in-reply-to'),
2286 )
2286 )
2287 def handleerrorpushkey(op, inpart):
2287 def handleerrorpushkey(op, inpart):
2288 """Used to transmit failure of a mandatory pushkey over the wire"""
2288 """Used to transmit failure of a mandatory pushkey over the wire"""
2289 kwargs = {}
2289 kwargs = {}
2290 for name in (b'namespace', b'key', b'new', b'old', b'ret'):
2290 for name in (b'namespace', b'key', b'new', b'old', b'ret'):
2291 value = inpart.params.get(name)
2291 value = inpart.params.get(name)
2292 if value is not None:
2292 if value is not None:
2293 kwargs[name] = value
2293 kwargs[name] = value
2294 raise error.PushkeyFailed(
2294 raise error.PushkeyFailed(
2295 inpart.params[b'in-reply-to'], **pycompat.strkwargs(kwargs)
2295 inpart.params[b'in-reply-to'], **pycompat.strkwargs(kwargs)
2296 )
2296 )
2297
2297
2298
2298
2299 @parthandler(b'error:unsupportedcontent', (b'parttype', b'params'))
2299 @parthandler(b'error:unsupportedcontent', (b'parttype', b'params'))
2300 def handleerrorunsupportedcontent(op, inpart):
2300 def handleerrorunsupportedcontent(op, inpart):
2301 """Used to transmit unknown content error over the wire"""
2301 """Used to transmit unknown content error over the wire"""
2302 kwargs = {}
2302 kwargs = {}
2303 parttype = inpart.params.get(b'parttype')
2303 parttype = inpart.params.get(b'parttype')
2304 if parttype is not None:
2304 if parttype is not None:
2305 kwargs[b'parttype'] = parttype
2305 kwargs[b'parttype'] = parttype
2306 params = inpart.params.get(b'params')
2306 params = inpart.params.get(b'params')
2307 if params is not None:
2307 if params is not None:
2308 kwargs[b'params'] = params.split(b'\0')
2308 kwargs[b'params'] = params.split(b'\0')
2309
2309
2310 raise error.BundleUnknownFeatureError(**pycompat.strkwargs(kwargs))
2310 raise error.BundleUnknownFeatureError(**pycompat.strkwargs(kwargs))
2311
2311
2312
2312
2313 @parthandler(b'error:pushraced', (b'message',))
2313 @parthandler(b'error:pushraced', (b'message',))
2314 def handleerrorpushraced(op, inpart):
2314 def handleerrorpushraced(op, inpart):
2315 """Used to transmit push race error over the wire"""
2315 """Used to transmit push race error over the wire"""
2316 raise error.ResponseError(_(b'push failed:'), inpart.params[b'message'])
2316 raise error.ResponseError(_(b'push failed:'), inpart.params[b'message'])
2317
2317
2318
2318
2319 @parthandler(b'listkeys', (b'namespace',))
2319 @parthandler(b'listkeys', (b'namespace',))
2320 def handlelistkeys(op, inpart):
2320 def handlelistkeys(op, inpart):
2321 """retrieve pushkey namespace content stored in a bundle2"""
2321 """retrieve pushkey namespace content stored in a bundle2"""
2322 namespace = inpart.params[b'namespace']
2322 namespace = inpart.params[b'namespace']
2323 r = pushkey.decodekeys(inpart.read())
2323 r = pushkey.decodekeys(inpart.read())
2324 op.records.add(b'listkeys', (namespace, r))
2324 op.records.add(b'listkeys', (namespace, r))
2325
2325
2326
2326
2327 @parthandler(b'pushkey', (b'namespace', b'key', b'old', b'new'))
2327 @parthandler(b'pushkey', (b'namespace', b'key', b'old', b'new'))
2328 def handlepushkey(op, inpart):
2328 def handlepushkey(op, inpart):
2329 """process a pushkey request"""
2329 """process a pushkey request"""
2330 dec = pushkey.decode
2330 dec = pushkey.decode
2331 namespace = dec(inpart.params[b'namespace'])
2331 namespace = dec(inpart.params[b'namespace'])
2332 key = dec(inpart.params[b'key'])
2332 key = dec(inpart.params[b'key'])
2333 old = dec(inpart.params[b'old'])
2333 old = dec(inpart.params[b'old'])
2334 new = dec(inpart.params[b'new'])
2334 new = dec(inpart.params[b'new'])
2335 # Grab the transaction to ensure that we have the lock before performing the
2335 # Grab the transaction to ensure that we have the lock before performing the
2336 # pushkey.
2336 # pushkey.
2337 if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
2337 if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
2338 op.gettransaction()
2338 op.gettransaction()
2339 ret = op.repo.pushkey(namespace, key, old, new)
2339 ret = op.repo.pushkey(namespace, key, old, new)
2340 record = {b'namespace': namespace, b'key': key, b'old': old, b'new': new}
2340 record = {b'namespace': namespace, b'key': key, b'old': old, b'new': new}
2341 op.records.add(b'pushkey', record)
2341 op.records.add(b'pushkey', record)
2342 if op.reply is not None:
2342 if op.reply is not None:
2343 rpart = op.reply.newpart(b'reply:pushkey')
2343 rpart = op.reply.newpart(b'reply:pushkey')
2344 rpart.addparam(
2344 rpart.addparam(
2345 b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
2345 b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
2346 )
2346 )
2347 rpart.addparam(b'return', b'%i' % ret, mandatory=False)
2347 rpart.addparam(b'return', b'%i' % ret, mandatory=False)
2348 if inpart.mandatory and not ret:
2348 if inpart.mandatory and not ret:
2349 kwargs = {}
2349 kwargs = {}
2350 for key in (b'namespace', b'key', b'new', b'old', b'ret'):
2350 for key in (b'namespace', b'key', b'new', b'old', b'ret'):
2351 if key in inpart.params:
2351 if key in inpart.params:
2352 kwargs[key] = inpart.params[key]
2352 kwargs[key] = inpart.params[key]
2353 raise error.PushkeyFailed(
2353 raise error.PushkeyFailed(
2354 partid=b'%d' % inpart.id, **pycompat.strkwargs(kwargs)
2354 partid=b'%d' % inpart.id, **pycompat.strkwargs(kwargs)
2355 )
2355 )
2356
2356
2357
2357
2358 @parthandler(b'bookmarks')
2358 @parthandler(b'bookmarks')
2359 def handlebookmark(op, inpart):
2359 def handlebookmark(op, inpart):
2360 """transmit bookmark information
2360 """transmit bookmark information
2361
2361
2362 The part contains binary encoded bookmark information.
2362 The part contains binary encoded bookmark information.
2363
2363
2364 The exact behavior of this part can be controlled by the 'bookmarks' mode
2364 The exact behavior of this part can be controlled by the 'bookmarks' mode
2365 on the bundle operation.
2365 on the bundle operation.
2366
2366
2367 When mode is 'apply' (the default) the bookmark information is applied as
2367 When mode is 'apply' (the default) the bookmark information is applied as
2368 is to the unbundling repository. Make sure a 'check:bookmarks' part is
2368 is to the unbundling repository. Make sure a 'check:bookmarks' part is
2369 issued earlier to check for push races in such updates. This behavior is
2369 issued earlier to check for push races in such updates. This behavior is
2370 suitable for pushing.
2370 suitable for pushing.
2371
2371
2372 When mode is 'records', the information is recorded into the 'bookmarks'
2372 When mode is 'records', the information is recorded into the 'bookmarks'
2373 records of the bundle operation. This behavior is suitable for pulling.
2373 records of the bundle operation. This behavior is suitable for pulling.
2374 """
2374 """
2375 changes = bookmarks.binarydecode(op.repo, inpart)
2375 changes = bookmarks.binarydecode(op.repo, inpart)
2376
2376
2377 pushkeycompat = op.repo.ui.configbool(
2377 pushkeycompat = op.repo.ui.configbool(
2378 b'server', b'bookmarks-pushkey-compat'
2378 b'server', b'bookmarks-pushkey-compat'
2379 )
2379 )
2380 bookmarksmode = op.modes.get(b'bookmarks', b'apply')
2380 bookmarksmode = op.modes.get(b'bookmarks', b'apply')
2381
2381
2382 if bookmarksmode == b'apply':
2382 if bookmarksmode == b'apply':
2383 tr = op.gettransaction()
2383 tr = op.gettransaction()
2384 bookstore = op.repo._bookmarks
2384 bookstore = op.repo._bookmarks
2385 if pushkeycompat:
2385 if pushkeycompat:
2386 allhooks = []
2386 allhooks = []
2387 for book, node in changes:
2387 for book, node in changes:
2388 hookargs = tr.hookargs.copy()
2388 hookargs = tr.hookargs.copy()
2389 hookargs[b'pushkeycompat'] = b'1'
2389 hookargs[b'pushkeycompat'] = b'1'
2390 hookargs[b'namespace'] = b'bookmarks'
2390 hookargs[b'namespace'] = b'bookmarks'
2391 hookargs[b'key'] = book
2391 hookargs[b'key'] = book
2392 hookargs[b'old'] = hex(bookstore.get(book, b''))
2392 hookargs[b'old'] = hex(bookstore.get(book, b''))
2393 hookargs[b'new'] = hex(node if node is not None else b'')
2393 hookargs[b'new'] = hex(node if node is not None else b'')
2394 allhooks.append(hookargs)
2394 allhooks.append(hookargs)
2395
2395
2396 for hookargs in allhooks:
2396 for hookargs in allhooks:
2397 op.repo.hook(
2397 op.repo.hook(
2398 b'prepushkey', throw=True, **pycompat.strkwargs(hookargs)
2398 b'prepushkey', throw=True, **pycompat.strkwargs(hookargs)
2399 )
2399 )
2400
2400
2401 for book, node in changes:
2401 for book, node in changes:
2402 if bookmarks.isdivergent(book):
2402 if bookmarks.isdivergent(book):
2403 msg = _(b'cannot accept divergent bookmark %s!') % book
2403 msg = _(b'cannot accept divergent bookmark %s!') % book
2404 raise error.Abort(msg)
2404 raise error.Abort(msg)
2405
2405
2406 bookstore.applychanges(op.repo, op.gettransaction(), changes)
2406 bookstore.applychanges(op.repo, op.gettransaction(), changes)
2407
2407
2408 if pushkeycompat:
2408 if pushkeycompat:
2409
2409
2410 def runhook(unused_success):
2410 def runhook(unused_success):
2411 for hookargs in allhooks:
2411 for hookargs in allhooks:
2412 op.repo.hook(b'pushkey', **pycompat.strkwargs(hookargs))
2412 op.repo.hook(b'pushkey', **pycompat.strkwargs(hookargs))
2413
2413
2414 op.repo._afterlock(runhook)
2414 op.repo._afterlock(runhook)
2415
2415
2416 elif bookmarksmode == b'records':
2416 elif bookmarksmode == b'records':
2417 for book, node in changes:
2417 for book, node in changes:
2418 record = {b'bookmark': book, b'node': node}
2418 record = {b'bookmark': book, b'node': node}
2419 op.records.add(b'bookmarks', record)
2419 op.records.add(b'bookmarks', record)
2420 else:
2420 else:
2421 raise error.ProgrammingError(
2421 raise error.ProgrammingError(
2422 b'unknown bookmark mode: %s' % bookmarksmode
2422 b'unknown bookmark mode: %s' % bookmarksmode
2423 )
2423 )
2424
2424
2425
2425
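# Mode-selection sketch (hypothetical op object): 'apply' is the push
# path and writes the bookmarks, while 'records' is the pull path and
# only stores them on the bundle operation for the caller to inspect.
#
#   op.modes[b'bookmarks'] = b'records'
#   # ... unbundle ...
#   for record in op.records[b'bookmarks']:
#       record[b'bookmark'], record[b'node']
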
2426 @parthandler(b'phase-heads')
2426 @parthandler(b'phase-heads')
2427 def handlephases(op, inpart):
2427 def handlephases(op, inpart):
2428 """apply phases from bundle part to repo"""
2428 """apply phases from bundle part to repo"""
2429 headsbyphase = phases.binarydecode(inpart)
2429 headsbyphase = phases.binarydecode(inpart)
2430 phases.updatephases(op.repo.unfiltered(), op.gettransaction, headsbyphase)
2430 phases.updatephases(op.repo.unfiltered(), op.gettransaction, headsbyphase)
2431
2431
2432
2432
2433 @parthandler(b'reply:pushkey', (b'return', b'in-reply-to'))
2433 @parthandler(b'reply:pushkey', (b'return', b'in-reply-to'))
2434 def handlepushkeyreply(op, inpart):
2434 def handlepushkeyreply(op, inpart):
2435 """retrieve the result of a pushkey request"""
2435 """retrieve the result of a pushkey request"""
2436 ret = int(inpart.params[b'return'])
2436 ret = int(inpart.params[b'return'])
2437 partid = int(inpart.params[b'in-reply-to'])
2437 partid = int(inpart.params[b'in-reply-to'])
2438 op.records.add(b'pushkey', {b'return': ret}, partid)
2438 op.records.add(b'pushkey', {b'return': ret}, partid)
2439
2439
2440
2440
2441 @parthandler(b'obsmarkers')
2441 @parthandler(b'obsmarkers')
2442 def handleobsmarker(op, inpart):
2442 def handleobsmarker(op, inpart):
2443 """add a stream of obsmarkers to the repo"""
2443 """add a stream of obsmarkers to the repo"""
2444 tr = op.gettransaction()
2444 tr = op.gettransaction()
2445 markerdata = inpart.read()
2445 markerdata = inpart.read()
2446 if op.ui.config(b'experimental', b'obsmarkers-exchange-debug'):
2446 if op.ui.config(b'experimental', b'obsmarkers-exchange-debug'):
2447 op.ui.writenoi18n(
2447 op.ui.writenoi18n(
2448 b'obsmarker-exchange: %i bytes received\n' % len(markerdata)
2448 b'obsmarker-exchange: %i bytes received\n' % len(markerdata)
2449 )
2449 )
2450 # The mergemarkers call will crash if marker creation is not enabled.
2450 # The mergemarkers call will crash if marker creation is not enabled.
2451 # We want to avoid this if the part is advisory.
2451 # We want to avoid this if the part is advisory.
2452 if not inpart.mandatory and op.repo.obsstore.readonly:
2452 if not inpart.mandatory and op.repo.obsstore.readonly:
2453 op.repo.ui.debug(
2453 op.repo.ui.debug(
2454 b'ignoring obsolescence markers, feature not enabled\n'
2454 b'ignoring obsolescence markers, feature not enabled\n'
2455 )
2455 )
2456 return
2456 return
2457 new = op.repo.obsstore.mergemarkers(tr, markerdata)
2457 new = op.repo.obsstore.mergemarkers(tr, markerdata)
2458 op.repo.invalidatevolatilesets()
2458 op.repo.invalidatevolatilesets()
2459 op.records.add(b'obsmarkers', {b'new': new})
2459 op.records.add(b'obsmarkers', {b'new': new})
2460 if op.reply is not None:
2460 if op.reply is not None:
2461 rpart = op.reply.newpart(b'reply:obsmarkers')
2461 rpart = op.reply.newpart(b'reply:obsmarkers')
2462 rpart.addparam(
2462 rpart.addparam(
2463 b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
2463 b'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False
2464 )
2464 )
2465 rpart.addparam(b'new', b'%i' % new, mandatory=False)
2465 rpart.addparam(b'new', b'%i' % new, mandatory=False)
2466
2466
2467
2467
2468 @parthandler(b'reply:obsmarkers', (b'new', b'in-reply-to'))
2468 @parthandler(b'reply:obsmarkers', (b'new', b'in-reply-to'))
2469 def handleobsmarkerreply(op, inpart):
2469 def handleobsmarkerreply(op, inpart):
2470 """retrieve the result of a pushkey request"""
2470 """retrieve the result of a pushkey request"""
2471 ret = int(inpart.params[b'new'])
2471 ret = int(inpart.params[b'new'])
2472 partid = int(inpart.params[b'in-reply-to'])
2472 partid = int(inpart.params[b'in-reply-to'])
2473 op.records.add(b'obsmarkers', {b'new': ret}, partid)
2473 op.records.add(b'obsmarkers', {b'new': ret}, partid)
2474
2474
2475
2475
2476 @parthandler(b'hgtagsfnodes')
2476 @parthandler(b'hgtagsfnodes')
2477 def handlehgtagsfnodes(op, inpart):
2477 def handlehgtagsfnodes(op, inpart):
2478 """Applies .hgtags fnodes cache entries to the local repo.
2478 """Applies .hgtags fnodes cache entries to the local repo.
2479
2479
2480 Payload is pairs of 20 byte changeset nodes and filenodes.
2480 Payload is pairs of 20 byte changeset nodes and filenodes.
2481 """
2481 """
2482 # Grab the transaction to ensure that we have the lock at this point.
2482 # Grab the transaction to ensure that we have the lock at this point.
2483 if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
2483 if op.ui.configbool(b'experimental', b'bundle2lazylocking'):
2484 op.gettransaction()
2484 op.gettransaction()
2485 cache = tags.hgtagsfnodescache(op.repo.unfiltered())
2485 cache = tags.hgtagsfnodescache(op.repo.unfiltered())
2486
2486
2487 count = 0
2487 count = 0
2488 while True:
2488 while True:
2489 node = inpart.read(20)
2489 node = inpart.read(20)
2490 fnode = inpart.read(20)
2490 fnode = inpart.read(20)
2491 if len(node) < 20 or len(fnode) < 20:
2491 if len(node) < 20 or len(fnode) < 20:
2492 op.ui.debug(b'ignoring incomplete received .hgtags fnodes data\n')
2492 op.ui.debug(b'ignoring incomplete received .hgtags fnodes data\n')
2493 break
2493 break
2494 cache.setfnode(node, fnode)
2494 cache.setfnode(node, fnode)
2495 count += 1
2495 count += 1
2496
2496
2497 cache.write()
2497 cache.write()
2498 op.ui.debug(b'applied %i hgtags fnodes cache entries\n' % count)
2498 op.ui.debug(b'applied %i hgtags fnodes cache entries\n' % count)
2499
2499
2500
2500
2501 rbcstruct = struct.Struct(b'>III')
2501 rbcstruct = struct.Struct(b'>III')
2502
2502
2503
2503
2504 @parthandler(b'cache:rev-branch-cache')
2504 @parthandler(b'cache:rev-branch-cache')
2505 def handlerbc(op, inpart):
2505 def handlerbc(op, inpart):
2506 """Legacy part, ignored for compatibility with bundles from or
2506 """Legacy part, ignored for compatibility with bundles from or
2507 for Mercurial before 5.7. Newer Mercurial computes the cache
2507 for Mercurial before 5.7. Newer Mercurial computes the cache
2508 efficiently enough during unbundling that the additional transfer
2508 efficiently enough during unbundling that the additional transfer
2509 is unnecessary."""
2509 is unnecessary."""
2510
2510
2511
2511
2512 @parthandler(b'pushvars')
2512 @parthandler(b'pushvars')
2513 def bundle2getvars(op, part):
2513 def bundle2getvars(op, part):
2514 '''unbundle a bundle2 containing shellvars on the server'''
2514 '''unbundle a bundle2 containing shellvars on the server'''
2515 # An option to disable unbundling on server-side for security reasons
2515 # An option to disable unbundling on server-side for security reasons
2516 if op.ui.configbool(b'push', b'pushvars.server'):
2516 if op.ui.configbool(b'push', b'pushvars.server'):
2517 hookargs = {}
2517 hookargs = {}
2518 for key, value in part.advisoryparams:
2518 for key, value in part.advisoryparams:
2519 key = key.upper()
2519 key = key.upper()
2520 # We want pushed variables to have USERVAR_ prepended so we know
2520 # We want pushed variables to have USERVAR_ prepended so we know
2521 # they came from the --pushvar flag.
2521 # they came from the --pushvar flag.
2522 key = b"USERVAR_" + key
2522 key = b"USERVAR_" + key
2523 hookargs[key] = value
2523 hookargs[key] = value
2524 op.addhookargs(hookargs)
2524 op.addhookargs(hookargs)
2525
2525
2526
2526
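# Example flow (hypothetical variable): a client push using
# '--pushvars DEBUG=1' surfaces in server hooks as HG_USERVAR_DEBUG=1;
# the USERVAR_ prefix marks the value as client-supplied rather than
# server-defined.
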
2527 @parthandler(b'stream2', (b'requirements', b'filecount', b'bytecount'))
2527 @parthandler(b'stream2', (b'requirements', b'filecount', b'bytecount'))
2528 def handlestreamv2bundle(op, part):
2528 def handlestreamv2bundle(op, part):
2529
2529
2530 requirements = urlreq.unquote(part.params[b'requirements']).split(b',')
2530 requirements = urlreq.unquote(part.params[b'requirements'])
2531 requirements = requirements.split(b',') if requirements else []
2531 filecount = int(part.params[b'filecount'])
2532 filecount = int(part.params[b'filecount'])
2532 bytecount = int(part.params[b'bytecount'])
2533 bytecount = int(part.params[b'bytecount'])
2533
2534
2534 repo = op.repo
2535 repo = op.repo
2535 if len(repo):
2536 if len(repo):
2536 msg = _(b'cannot apply stream clone to non-empty repository')
2537 msg = _(b'cannot apply stream clone to non-empty repository')
2537 raise error.Abort(msg)
2538 raise error.Abort(msg)
2538
2539
2539 repo.ui.debug(b'applying stream bundle\n')
2540 repo.ui.debug(b'applying stream bundle\n')
2540 streamclone.applybundlev2(repo, part, filecount, bytecount, requirements)
2541 streamclone.applybundlev2(repo, part, filecount, bytecount, requirements)
2541
2542
2542
2543
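# Why the empty-requirements guard added above matters: splitting an
# empty byte string still produces one (empty) element, which would be
# treated as a bogus requirement downstream.
#
#   b''.split(b',')    == [b'']        # old behavior on an empty parameter
#   b'a,b'.split(b',') == [b'a', b'b']
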
2543 def widen_bundle(
2544 def widen_bundle(
2544 bundler, repo, oldmatcher, newmatcher, common, known, cgversion, ellipses
2545 bundler, repo, oldmatcher, newmatcher, common, known, cgversion, ellipses
2545 ):
2546 ):
2546 """generates bundle2 for widening a narrow clone
2547 """generates bundle2 for widening a narrow clone
2547
2548
2548 bundler is the bundle to which data should be added
2549 bundler is the bundle to which data should be added
2549 repo is the localrepository instance
2550 repo is the localrepository instance
2550 oldmatcher matches what the client already has
2551 oldmatcher matches what the client already has
2551 newmatcher matches what the client needs (including what it already has)
2552 newmatcher matches what the client needs (including what it already has)
2552 common is the set of common heads between server and client
2553 common is the set of common heads between server and client
2553 known is a set of revs known on the client side (used in ellipses)
2554 known is a set of revs known on the client side (used in ellipses)
2554 cgversion is the changegroup version to send
2555 cgversion is the changegroup version to send
2555 ellipses is a boolean value telling whether to send ellipses data or not
2556 ellipses is a boolean value telling whether to send ellipses data or not
2556
2557
2557 returns a bundle2 containing the data required for widening
2558 returns a bundle2 containing the data required for widening
2558 """
2559 """
2559 commonnodes = set()
2560 commonnodes = set()
2560 cl = repo.changelog
2561 cl = repo.changelog
2561 for r in repo.revs(b"::%ln", common):
2562 for r in repo.revs(b"::%ln", common):
2562 commonnodes.add(cl.node(r))
2563 commonnodes.add(cl.node(r))
2563 if commonnodes:
2564 if commonnodes:
2564 packer = changegroup.getbundler(
2565 packer = changegroup.getbundler(
2565 cgversion,
2566 cgversion,
2566 repo,
2567 repo,
2567 oldmatcher=oldmatcher,
2568 oldmatcher=oldmatcher,
2568 matcher=newmatcher,
2569 matcher=newmatcher,
2569 fullnodes=commonnodes,
2570 fullnodes=commonnodes,
2570 )
2571 )
2571 cgdata = packer.generate(
2572 cgdata = packer.generate(
2572 {repo.nullid},
2573 {repo.nullid},
2573 list(commonnodes),
2574 list(commonnodes),
2574 False,
2575 False,
2575 b'narrow_widen',
2576 b'narrow_widen',
2576 changelog=False,
2577 changelog=False,
2577 )
2578 )
2578
2579
2579 part = bundler.newpart(b'changegroup', data=cgdata)
2580 part = bundler.newpart(b'changegroup', data=cgdata)
2580 part.addparam(b'version', cgversion)
2581 part.addparam(b'version', cgversion)
2581 if scmutil.istreemanifest(repo):
2582 if scmutil.istreemanifest(repo):
2582 part.addparam(b'treemanifest', b'1')
2583 part.addparam(b'treemanifest', b'1')
2583 if repository.REPO_FEATURE_SIDE_DATA in repo.features:
2584 if repository.REPO_FEATURE_SIDE_DATA in repo.features:
2584 part.addparam(b'exp-sidedata', b'1')
2585 part.addparam(b'exp-sidedata', b'1')
2585 wanted = format_remote_wanted_sidedata(repo)
2586 wanted = format_remote_wanted_sidedata(repo)
2586 part.addparam(b'exp-wanted-sidedata', wanted)
2587 part.addparam(b'exp-wanted-sidedata', wanted)
2587
2588
2588 return bundler
2589 return bundler
@@ -1,1304 +1,1306 b''
1 /*
1 /*
2 parsers.c - efficient content parsing
2 parsers.c - efficient content parsing
3
3
4 Copyright 2008 Olivia Mackall <olivia@selenic.com> and others
4 Copyright 2008 Olivia Mackall <olivia@selenic.com> and others
5
5
6 This software may be used and distributed according to the terms of
6 This software may be used and distributed according to the terms of
7 the GNU General Public License, incorporated herein by reference.
7 the GNU General Public License, incorporated herein by reference.
8 */
8 */
9
9
10 #define PY_SSIZE_T_CLEAN
10 #define PY_SSIZE_T_CLEAN
11 #include <Python.h>
11 #include <Python.h>
12 #include <ctype.h>
12 #include <ctype.h>
13 #include <stddef.h>
13 #include <stddef.h>
14 #include <string.h>
14 #include <string.h>
15
15
16 #include "bitmanipulation.h"
16 #include "bitmanipulation.h"
17 #include "charencode.h"
17 #include "charencode.h"
18 #include "util.h"
18 #include "util.h"
19
19
20 static const char *const versionerrortext = "Python minor version mismatch";
20 static const char *const versionerrortext = "Python minor version mismatch";
21
21
22 static const int dirstate_v1_from_p2 = -2;
22 static const int dirstate_v1_from_p2 = -2;
23 static const int dirstate_v1_nonnormal = -1;
23 static const int dirstate_v1_nonnormal = -1;
24 static const int ambiguous_time = -1;
24 static const int ambiguous_time = -1;
25
25
static PyObject *dict_new_presized(PyObject *self, PyObject *args)
{
	Py_ssize_t expected_size;

	if (!PyArg_ParseTuple(args, "n:make_presized_dict", &expected_size)) {
		return NULL;
	}

	return _dict_new_presized(expected_size);
}

static PyObject *dirstate_item_new(PyTypeObject *subtype, PyObject *args,
                                   PyObject *kwds)
{
	/* We do all the initialization here and not a tp_init function because
	 * dirstate_item is immutable. */
	dirstateItemObject *t;
	int wc_tracked;
	int p1_tracked;
	int p2_info;
	int has_meaningful_data;
	int has_meaningful_mtime;
	int mtime_second_ambiguous;
	int mode;
	int size;
	int mtime_s;
	int mtime_ns;
	PyObject *parentfiledata;
	PyObject *mtime;
	PyObject *fallback_exec;
	PyObject *fallback_symlink;
	static char *keywords_name[] = {
	    "wc_tracked", "p1_tracked", "p2_info",
	    "has_meaningful_data", "has_meaningful_mtime", "parentfiledata",
	    "fallback_exec", "fallback_symlink", NULL,
	};
	wc_tracked = 0;
	p1_tracked = 0;
	p2_info = 0;
	has_meaningful_mtime = 1;
	has_meaningful_data = 1;
	mtime_second_ambiguous = 0;
	parentfiledata = Py_None;
	fallback_exec = Py_None;
	fallback_symlink = Py_None;
	if (!PyArg_ParseTupleAndKeywords(args, kwds, "|iiiiiOOO", keywords_name,
	                                 &wc_tracked, &p1_tracked, &p2_info,
	                                 &has_meaningful_data,
	                                 &has_meaningful_mtime, &parentfiledata,
	                                 &fallback_exec, &fallback_symlink)) {
		return NULL;
	}
	t = (dirstateItemObject *)subtype->tp_alloc(subtype, 1);
	if (!t) {
		return NULL;
	}

	t->flags = 0;
	if (wc_tracked) {
		t->flags |= dirstate_flag_wc_tracked;
	}
	if (p1_tracked) {
		t->flags |= dirstate_flag_p1_tracked;
	}
	if (p2_info) {
		t->flags |= dirstate_flag_p2_info;
	}

	if (fallback_exec != Py_None) {
		t->flags |= dirstate_flag_has_fallback_exec;
		if (PyObject_IsTrue(fallback_exec)) {
			t->flags |= dirstate_flag_fallback_exec;
		}
	}
	if (fallback_symlink != Py_None) {
		t->flags |= dirstate_flag_has_fallback_symlink;
		if (PyObject_IsTrue(fallback_symlink)) {
			t->flags |= dirstate_flag_fallback_symlink;
		}
	}

	if (parentfiledata != Py_None) {
		if (!PyArg_ParseTuple(parentfiledata, "iiO", &mode, &size,
		                      &mtime)) {
			return NULL;
		}
		if (mtime != Py_None) {
			if (!PyArg_ParseTuple(mtime, "iii", &mtime_s, &mtime_ns,
			                      &mtime_second_ambiguous)) {
				return NULL;
			}
		} else {
			has_meaningful_mtime = 0;
		}
	} else {
		has_meaningful_data = 0;
		has_meaningful_mtime = 0;
	}
	if (has_meaningful_data) {
		t->flags |= dirstate_flag_has_meaningful_data;
		t->mode = mode;
		t->size = size;
		if (mtime_second_ambiguous) {
			t->flags |= dirstate_flag_mtime_second_ambiguous;
		}
	} else {
		t->mode = 0;
		t->size = 0;
	}
	if (has_meaningful_mtime) {
		t->flags |= dirstate_flag_has_mtime;
		t->mtime_s = mtime_s;
		t->mtime_ns = mtime_ns;
	} else {
		t->mtime_s = 0;
		t->mtime_ns = 0;
	}
	return (PyObject *)t;
}

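/*
 * Illustrative (hypothetical) Python-level use of the constructor above,
 * assuming the type is exported to Python as `DirstateItem`:
 *
 *     item = DirstateItem(wc_tracked=True, p1_tracked=True,
 *                         parentfiledata=(mode, size,
 *                                         (mtime_s, mtime_ns, False)))
 *
 * Omitting `parentfiledata` (or passing mtime=None inside it) clears the
 * "meaningful data"/"mtime" flags, matching the fallthrough branches above.
 */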
static void dirstate_item_dealloc(PyObject *o)
{
	PyObject_Del(o);
}

static inline bool dirstate_item_c_tracked(dirstateItemObject *self)
{
	return (self->flags & dirstate_flag_wc_tracked);
}

static inline bool dirstate_item_c_any_tracked(dirstateItemObject *self)
{
	const int mask = dirstate_flag_wc_tracked | dirstate_flag_p1_tracked |
	                 dirstate_flag_p2_info;
	return (self->flags & mask);
}

static inline bool dirstate_item_c_added(dirstateItemObject *self)
{
	const int mask = (dirstate_flag_wc_tracked | dirstate_flag_p1_tracked |
	                  dirstate_flag_p2_info);
	const int target = dirstate_flag_wc_tracked;
	return (self->flags & mask) == target;
}

static inline bool dirstate_item_c_removed(dirstateItemObject *self)
{
	if (self->flags & dirstate_flag_wc_tracked) {
		return false;
	}
	return (self->flags &
	        (dirstate_flag_p1_tracked | dirstate_flag_p2_info));
}

static inline bool dirstate_item_c_merged(dirstateItemObject *self)
{
	return ((self->flags & dirstate_flag_wc_tracked) &&
	        (self->flags & dirstate_flag_p1_tracked) &&
	        (self->flags & dirstate_flag_p2_info));
}

static inline bool dirstate_item_c_from_p2(dirstateItemObject *self)
{
	return ((self->flags & dirstate_flag_wc_tracked) &&
	        !(self->flags & dirstate_flag_p1_tracked) &&
	        (self->flags & dirstate_flag_p2_info));
}

static inline char dirstate_item_c_v1_state(dirstateItemObject *self)
{
	if (dirstate_item_c_removed(self)) {
		return 'r';
	} else if (dirstate_item_c_merged(self)) {
		return 'm';
	} else if (dirstate_item_c_added(self)) {
		return 'a';
	} else {
		return 'n';
	}
}

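/*
 * v1 state characters produced above:
 *   'n' (normal)   tracked, clean or possibly dirty
 *   'a' (added)    tracked now, present in neither parent
 *   'r' (removed)  untracked now, but tracked in p1 and/or p2
 *   'm' (merged)   tracked, carrying information from both parents
 */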
static inline bool dirstate_item_c_has_fallback_exec(dirstateItemObject *self)
{
	/* the mask must be parenthesized before the cast: a bare
	 * `(bool)self->flags & flag` casts first and then tests only the
	 * low bit of the flag */
	return (bool)(self->flags & dirstate_flag_has_fallback_exec);
}

static inline bool
dirstate_item_c_has_fallback_symlink(dirstateItemObject *self)
{
	return (bool)(self->flags & dirstate_flag_has_fallback_symlink);
}

static inline int dirstate_item_c_v1_mode(dirstateItemObject *self)
{
	if (self->flags & dirstate_flag_has_meaningful_data) {
		return self->mode;
	} else {
		return 0;
	}
}

static inline int dirstate_item_c_v1_size(dirstateItemObject *self)
{
	if (!(self->flags & dirstate_flag_wc_tracked) &&
	    (self->flags & dirstate_flag_p2_info)) {
		if (self->flags & dirstate_flag_p1_tracked) {
			return dirstate_v1_nonnormal;
		} else {
			return dirstate_v1_from_p2;
		}
	} else if (dirstate_item_c_removed(self)) {
		return 0;
	} else if (self->flags & dirstate_flag_p2_info) {
		return dirstate_v1_from_p2;
	} else if (dirstate_item_c_added(self)) {
		return dirstate_v1_nonnormal;
	} else if (self->flags & dirstate_flag_has_meaningful_data) {
		return self->size;
	} else {
		return dirstate_v1_nonnormal;
	}
}

static inline int dirstate_item_c_v1_mtime(dirstateItemObject *self)
{
	if (dirstate_item_c_removed(self)) {
		return 0;
	} else if (!(self->flags & dirstate_flag_has_mtime) ||
	           !(self->flags & dirstate_flag_p1_tracked) ||
	           !(self->flags & dirstate_flag_wc_tracked) ||
	           (self->flags & dirstate_flag_p2_info) ||
	           (self->flags & dirstate_flag_mtime_second_ambiguous)) {
		return ambiguous_time;
	} else {
		return self->mtime_s;
	}
}

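/*
 * Worked example of the fallbacks above: a file in the 'm' (merged) state
 * is wc_tracked + p1_tracked + p2_info, so v1_size() reports
 * dirstate_v1_from_p2 (-2) and v1_mtime() reports ambiguous_time (-1),
 * forcing a content check on the next status run.
 */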
static PyObject *dirstate_item_v2_data(dirstateItemObject *self)
{
	int flags = self->flags;
	int mode = dirstate_item_c_v1_mode(self);
#ifdef S_IXUSR
	/* This is for platforms with an exec bit */
	if ((mode & S_IXUSR) != 0) {
		flags |= dirstate_flag_mode_exec_perm;
	} else {
		flags &= ~dirstate_flag_mode_exec_perm;
	}
#else
	flags &= ~dirstate_flag_mode_exec_perm;
#endif
#ifdef S_ISLNK
	/* This is for platforms with support for symlinks */
	if (S_ISLNK(mode)) {
		flags |= dirstate_flag_mode_is_symlink;
	} else {
		flags &= ~dirstate_flag_mode_is_symlink;
	}
#else
	flags &= ~dirstate_flag_mode_is_symlink;
#endif
	return Py_BuildValue("iiii", flags, self->size, self->mtime_s,
	                     self->mtime_ns);
};

static PyObject *dirstate_item_v1_state(dirstateItemObject *self)
{
	char state = dirstate_item_c_v1_state(self);
	return PyBytes_FromStringAndSize(&state, 1);
};

static PyObject *dirstate_item_v1_mode(dirstateItemObject *self)
{
	return PyLong_FromLong(dirstate_item_c_v1_mode(self));
};

static PyObject *dirstate_item_v1_size(dirstateItemObject *self)
{
	return PyLong_FromLong(dirstate_item_c_v1_size(self));
};

static PyObject *dirstate_item_v1_mtime(dirstateItemObject *self)
{
	return PyLong_FromLong(dirstate_item_c_v1_mtime(self));
};

static PyObject *dirstate_item_mtime_likely_equal_to(dirstateItemObject *self,
                                                     PyObject *other)
{
	int other_s;
	int other_ns;
	int other_second_ambiguous;
	if (!PyArg_ParseTuple(other, "iii", &other_s, &other_ns,
	                      &other_second_ambiguous)) {
		return NULL;
	}
	if (!(self->flags & dirstate_flag_has_mtime)) {
		Py_RETURN_FALSE;
	}
	if (self->mtime_s != other_s) {
		Py_RETURN_FALSE;
	}
	if (self->mtime_ns == 0 || other_ns == 0) {
		if (self->flags & dirstate_flag_mtime_second_ambiguous) {
			Py_RETURN_FALSE;
		} else {
			Py_RETURN_TRUE;
		}
	}
	if (self->mtime_ns == other_ns) {
		Py_RETURN_TRUE;
	} else {
		Py_RETURN_FALSE;
	}
};

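/*
 * Note on the comparison above: a nanosecond field of 0 is treated as
 * "possibly truncated" (e.g. a value stored without sub-second precision),
 * so agreement on the second is enough -- unless mtime_second_ambiguous is
 * set, meaning another write landed within that same second and seconds
 * alone cannot prove equality.
 */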
/* This will never change since it's bound to V1
 */
static inline dirstateItemObject *
dirstate_item_from_v1_data(char state, int mode, int size, int mtime)
{
	dirstateItemObject *t =
	    PyObject_New(dirstateItemObject, &dirstateItemType);
	if (!t) {
		return NULL;
	}
	t->flags = 0;
	t->mode = 0;
	t->size = 0;
	t->mtime_s = 0;
	t->mtime_ns = 0;

	if (state == 'm') {
		t->flags = (dirstate_flag_wc_tracked |
		            dirstate_flag_p1_tracked | dirstate_flag_p2_info);
	} else if (state == 'a') {
		t->flags = dirstate_flag_wc_tracked;
	} else if (state == 'r') {
		if (size == dirstate_v1_nonnormal) {
			t->flags =
			    dirstate_flag_p1_tracked | dirstate_flag_p2_info;
		} else if (size == dirstate_v1_from_p2) {
			t->flags = dirstate_flag_p2_info;
		} else {
			t->flags = dirstate_flag_p1_tracked;
		}
	} else if (state == 'n') {
		if (size == dirstate_v1_from_p2) {
			t->flags =
			    dirstate_flag_wc_tracked | dirstate_flag_p2_info;
		} else if (size == dirstate_v1_nonnormal) {
			t->flags =
			    dirstate_flag_wc_tracked | dirstate_flag_p1_tracked;
		} else if (mtime == ambiguous_time) {
			t->flags = (dirstate_flag_wc_tracked |
			            dirstate_flag_p1_tracked |
			            dirstate_flag_has_meaningful_data);
			t->mode = mode;
			t->size = size;
		} else {
			t->flags = (dirstate_flag_wc_tracked |
			            dirstate_flag_p1_tracked |
			            dirstate_flag_has_meaningful_data |
			            dirstate_flag_has_mtime);
			t->mode = mode;
			t->size = size;
			t->mtime_s = mtime;
		}
	} else {
		PyErr_Format(PyExc_RuntimeError,
		             "unknown state: `%c` (%d, %d, %d)", state, mode,
		             size, mtime);
		Py_DECREF(t);
		return NULL;
	}

	return t;
}

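/*
 * Example of the mapping above: ('n', 0644, 12, 1700000000) yields
 * wc_tracked | p1_tracked | has_meaningful_data | has_mtime with mode=0644,
 * size=12 and mtime_s=1700000000, while ('n', 0644, 12, -1) (ambiguous
 * mtime) keeps mode and size but leaves has_mtime unset.
 */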
/* This will never change since it's bound to V1, unlike `dirstate_item_new` */
static PyObject *dirstate_item_from_v1_meth(PyTypeObject *subtype,
                                            PyObject *args)
{
	/* We do all the initialization here and not a tp_init function because
	 * dirstate_item is immutable. */
	char state;
	int size, mode, mtime;
	if (!PyArg_ParseTuple(args, "ciii", &state, &mode, &size, &mtime)) {
		return NULL;
	}
	return (PyObject *)dirstate_item_from_v1_data(state, mode, size, mtime);
};

static PyObject *dirstate_item_from_v2_meth(PyTypeObject *subtype,
                                            PyObject *args)
{
	dirstateItemObject *t =
	    PyObject_New(dirstateItemObject, &dirstateItemType);
	if (!t) {
		return NULL;
	}
	if (!PyArg_ParseTuple(args, "iiii", &t->flags, &t->size, &t->mtime_s,
	                      &t->mtime_ns)) {
		/* don't leak the freshly allocated object on bad arguments */
		Py_DECREF(t);
		return NULL;
	}
	if (t->flags & dirstate_flag_expected_state_is_modified) {
		t->flags &= ~(dirstate_flag_expected_state_is_modified |
		              dirstate_flag_has_meaningful_data |
		              dirstate_flag_has_mtime);
	}
	t->mode = 0;
	if (t->flags & dirstate_flag_has_meaningful_data) {
		if (t->flags & dirstate_flag_mode_exec_perm) {
			t->mode = 0755;
		} else {
			t->mode = 0644;
		}
		if (t->flags & dirstate_flag_mode_is_symlink) {
			t->mode |= S_IFLNK;
		} else {
			t->mode |= S_IFREG;
		}
	}
	return (PyObject *)t;
};

/* This means the next status call will have to actually check its content
   to make sure it is correct. */
static PyObject *dirstate_item_set_possibly_dirty(dirstateItemObject *self)
{
	self->flags &= ~dirstate_flag_has_mtime;
	Py_RETURN_NONE;
}

/* See docstring of the python implementation for details */
static PyObject *dirstate_item_set_clean(dirstateItemObject *self,
                                         PyObject *args)
{
	int size, mode, mtime_s, mtime_ns, mtime_second_ambiguous;
	PyObject *mtime;
	mtime_s = 0;
	mtime_ns = 0;
	mtime_second_ambiguous = 0;
	if (!PyArg_ParseTuple(args, "iiO", &mode, &size, &mtime)) {
		return NULL;
	}
	if (mtime != Py_None) {
		if (!PyArg_ParseTuple(mtime, "iii", &mtime_s, &mtime_ns,
		                      &mtime_second_ambiguous)) {
			return NULL;
		}
	} else {
		/* note: this clear is shadowed by the unconditional flag
		 * assignment below, which re-sets has_mtime */
		self->flags &= ~dirstate_flag_has_mtime;
	}
	self->flags = dirstate_flag_wc_tracked | dirstate_flag_p1_tracked |
	              dirstate_flag_has_meaningful_data |
	              dirstate_flag_has_mtime;
	if (mtime_second_ambiguous) {
		self->flags |= dirstate_flag_mtime_second_ambiguous;
	}
	self->mode = mode;
	self->size = size;
	self->mtime_s = mtime_s;
	self->mtime_ns = mtime_ns;
	Py_RETURN_NONE;
}

static PyObject *dirstate_item_set_tracked(dirstateItemObject *self)
{
	self->flags |= dirstate_flag_wc_tracked;
	self->flags &= ~dirstate_flag_has_mtime;
	Py_RETURN_NONE;
}

static PyObject *dirstate_item_set_untracked(dirstateItemObject *self)
{
	self->flags &= ~dirstate_flag_wc_tracked;
	self->flags &= ~dirstate_flag_has_meaningful_data;
	self->flags &= ~dirstate_flag_has_mtime;
	self->mode = 0;
	self->size = 0;
	self->mtime_s = 0;
	self->mtime_ns = 0;
	Py_RETURN_NONE;
}

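/*
 * The has_meaningful_data / has_mtime clears above are the two lines this
 * merge adds: presumably so that an entry made untracked stops advertising
 * the mode/size/mtime fields zeroed just above as meaningful, e.g. across a
 * later set_tracked() call.
 */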
static PyObject *dirstate_item_drop_merge_data(dirstateItemObject *self)
{
	if (self->flags & dirstate_flag_p2_info) {
		self->flags &= ~(dirstate_flag_p2_info |
		                 dirstate_flag_has_meaningful_data |
		                 dirstate_flag_has_mtime);
		self->mode = 0;
		self->size = 0;
		self->mtime_s = 0;
		self->mtime_ns = 0;
	}
	Py_RETURN_NONE;
}
static PyMethodDef dirstate_item_methods[] = {
    {"v2_data", (PyCFunction)dirstate_item_v2_data, METH_NOARGS,
     "return data suitable for v2 serialization"},
    {"v1_state", (PyCFunction)dirstate_item_v1_state, METH_NOARGS,
     "return a \"state\" suitable for v1 serialization"},
    {"v1_mode", (PyCFunction)dirstate_item_v1_mode, METH_NOARGS,
     "return a \"mode\" suitable for v1 serialization"},
    {"v1_size", (PyCFunction)dirstate_item_v1_size, METH_NOARGS,
     "return a \"size\" suitable for v1 serialization"},
    {"v1_mtime", (PyCFunction)dirstate_item_v1_mtime, METH_NOARGS,
     "return a \"mtime\" suitable for v1 serialization"},
    {"mtime_likely_equal_to", (PyCFunction)dirstate_item_mtime_likely_equal_to,
     METH_O, "True if the stored mtime is likely equal to the given mtime"},
    {"from_v1_data", (PyCFunction)dirstate_item_from_v1_meth,
     METH_VARARGS | METH_CLASS, "build a new DirstateItem object from V1 data"},
    {"from_v2_data", (PyCFunction)dirstate_item_from_v2_meth,
     METH_VARARGS | METH_CLASS, "build a new DirstateItem object from V2 data"},
    {"set_possibly_dirty", (PyCFunction)dirstate_item_set_possibly_dirty,
     METH_NOARGS, "mark a file as \"possibly dirty\""},
    {"set_clean", (PyCFunction)dirstate_item_set_clean, METH_VARARGS,
     "mark a file as \"clean\""},
    {"set_tracked", (PyCFunction)dirstate_item_set_tracked, METH_NOARGS,
     "mark a file as \"tracked\""},
    {"set_untracked", (PyCFunction)dirstate_item_set_untracked, METH_NOARGS,
     "mark a file as \"untracked\""},
    {"drop_merge_data", (PyCFunction)dirstate_item_drop_merge_data, METH_NOARGS,
     "remove all \"merge-only\" information from a DirstateItem"},
    {NULL} /* Sentinel */
};

static PyObject *dirstate_item_get_mode(dirstateItemObject *self)
{
	return PyLong_FromLong(dirstate_item_c_v1_mode(self));
};

static PyObject *dirstate_item_get_size(dirstateItemObject *self)
{
	return PyLong_FromLong(dirstate_item_c_v1_size(self));
};

static PyObject *dirstate_item_get_mtime(dirstateItemObject *self)
{
	return PyLong_FromLong(dirstate_item_c_v1_mtime(self));
};

static PyObject *dirstate_item_get_state(dirstateItemObject *self)
{
	char state = dirstate_item_c_v1_state(self);
	return PyBytes_FromStringAndSize(&state, 1);
};

static PyObject *dirstate_item_get_has_fallback_exec(dirstateItemObject *self)
{
	if (dirstate_item_c_has_fallback_exec(self)) {
		Py_RETURN_TRUE;
	} else {
		Py_RETURN_FALSE;
	}
};

static PyObject *dirstate_item_get_fallback_exec(dirstateItemObject *self)
{
	if (dirstate_item_c_has_fallback_exec(self)) {
		if (self->flags & dirstate_flag_fallback_exec) {
			Py_RETURN_TRUE;
		} else {
			Py_RETURN_FALSE;
		}
	} else {
		Py_RETURN_NONE;
	}
};

static int dirstate_item_set_fallback_exec(dirstateItemObject *self,
                                           PyObject *value)
{
	if ((value == Py_None) || (value == NULL)) {
		self->flags &= ~dirstate_flag_has_fallback_exec;
	} else {
		self->flags |= dirstate_flag_has_fallback_exec;
		if (PyObject_IsTrue(value)) {
			self->flags |= dirstate_flag_fallback_exec;
		} else {
			self->flags &= ~dirstate_flag_fallback_exec;
		}
	}
	return 0;
};

static PyObject *
dirstate_item_get_has_fallback_symlink(dirstateItemObject *self)
{
	if (dirstate_item_c_has_fallback_symlink(self)) {
		Py_RETURN_TRUE;
	} else {
		Py_RETURN_FALSE;
	}
};

static PyObject *dirstate_item_get_fallback_symlink(dirstateItemObject *self)
{
	if (dirstate_item_c_has_fallback_symlink(self)) {
		if (self->flags & dirstate_flag_fallback_symlink) {
			Py_RETURN_TRUE;
		} else {
			Py_RETURN_FALSE;
		}
	} else {
		Py_RETURN_NONE;
	}
};

static int dirstate_item_set_fallback_symlink(dirstateItemObject *self,
                                              PyObject *value)
{
	if ((value == Py_None) || (value == NULL)) {
		self->flags &= ~dirstate_flag_has_fallback_symlink;
	} else {
		self->flags |= dirstate_flag_has_fallback_symlink;
		if (PyObject_IsTrue(value)) {
			self->flags |= dirstate_flag_fallback_symlink;
		} else {
			self->flags &= ~dirstate_flag_fallback_symlink;
		}
	}
	return 0;
};

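/*
 * The fallback_exec / fallback_symlink properties built from the getter and
 * setter pairs above are tri-state at the Python level: None when no
 * fallback is recorded, True/False otherwise.  Assigning None (or deleting
 * the attribute) clears the "has fallback" bit again, e.g.:
 *
 *     item.fallback_exec = True   # record "treat as executable"
 *     item.fallback_exec = None   # forget the fallback
 */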
static PyObject *dirstate_item_get_tracked(dirstateItemObject *self)
{
	if (dirstate_item_c_tracked(self)) {
		Py_RETURN_TRUE;
	} else {
		Py_RETURN_FALSE;
	}
};
static PyObject *dirstate_item_get_p1_tracked(dirstateItemObject *self)
{
	if (self->flags & dirstate_flag_p1_tracked) {
		Py_RETURN_TRUE;
	} else {
		Py_RETURN_FALSE;
	}
};

static PyObject *dirstate_item_get_added(dirstateItemObject *self)
{
	if (dirstate_item_c_added(self)) {
		Py_RETURN_TRUE;
	} else {
		Py_RETURN_FALSE;
	}
};

static PyObject *dirstate_item_get_p2_info(dirstateItemObject *self)
{
	if (self->flags & dirstate_flag_wc_tracked &&
	    self->flags & dirstate_flag_p2_info) {
		Py_RETURN_TRUE;
	} else {
		Py_RETURN_FALSE;
	}
};

static PyObject *dirstate_item_get_merged(dirstateItemObject *self)
{
	if (dirstate_item_c_merged(self)) {
		Py_RETURN_TRUE;
	} else {
		Py_RETURN_FALSE;
	}
};

static PyObject *dirstate_item_get_from_p2(dirstateItemObject *self)
{
	if (dirstate_item_c_from_p2(self)) {
		Py_RETURN_TRUE;
	} else {
		Py_RETURN_FALSE;
	}
};

static PyObject *dirstate_item_get_maybe_clean(dirstateItemObject *self)
{
	if (!(self->flags & dirstate_flag_wc_tracked)) {
		Py_RETURN_FALSE;
	} else if (!(self->flags & dirstate_flag_p1_tracked)) {
		Py_RETURN_FALSE;
	} else if (self->flags & dirstate_flag_p2_info) {
		Py_RETURN_FALSE;
	} else {
		Py_RETURN_TRUE;
	}
};

static PyObject *dirstate_item_get_any_tracked(dirstateItemObject *self)
{
	if (dirstate_item_c_any_tracked(self)) {
		Py_RETURN_TRUE;
	} else {
		Py_RETURN_FALSE;
	}
};

static PyObject *dirstate_item_get_removed(dirstateItemObject *self)
{
	if (dirstate_item_c_removed(self)) {
		Py_RETURN_TRUE;
	} else {
		Py_RETURN_FALSE;
	}
};

static PyGetSetDef dirstate_item_getset[] = {
    {"mode", (getter)dirstate_item_get_mode, NULL, "mode", NULL},
    {"size", (getter)dirstate_item_get_size, NULL, "size", NULL},
    {"mtime", (getter)dirstate_item_get_mtime, NULL, "mtime", NULL},
    {"state", (getter)dirstate_item_get_state, NULL, "state", NULL},
    {"has_fallback_exec", (getter)dirstate_item_get_has_fallback_exec, NULL,
     "has_fallback_exec", NULL},
    {"fallback_exec", (getter)dirstate_item_get_fallback_exec,
     (setter)dirstate_item_set_fallback_exec, "fallback_exec", NULL},
    {"has_fallback_symlink", (getter)dirstate_item_get_has_fallback_symlink,
     NULL, "has_fallback_symlink", NULL},
    {"fallback_symlink", (getter)dirstate_item_get_fallback_symlink,
     (setter)dirstate_item_set_fallback_symlink, "fallback_symlink", NULL},
    {"tracked", (getter)dirstate_item_get_tracked, NULL, "tracked", NULL},
    {"p1_tracked", (getter)dirstate_item_get_p1_tracked, NULL, "p1_tracked",
     NULL},
    {"added", (getter)dirstate_item_get_added, NULL, "added", NULL},
    {"p2_info", (getter)dirstate_item_get_p2_info, NULL, "p2_info", NULL},
    {"merged", (getter)dirstate_item_get_merged, NULL, "merged", NULL},
    {"from_p2", (getter)dirstate_item_get_from_p2, NULL, "from_p2", NULL},
    {"maybe_clean", (getter)dirstate_item_get_maybe_clean, NULL, "maybe_clean",
     NULL},
    {"any_tracked", (getter)dirstate_item_get_any_tracked, NULL, "any_tracked",
     NULL},
    {"removed", (getter)dirstate_item_get_removed, NULL, "removed", NULL},
    {NULL} /* Sentinel */
};

PyTypeObject dirstateItemType = {
    PyVarObject_HEAD_INIT(NULL, 0)     /* header */
    "dirstate_tuple",                  /* tp_name */
    sizeof(dirstateItemObject),        /* tp_basicsize */
    0,                                 /* tp_itemsize */
    (destructor)dirstate_item_dealloc, /* tp_dealloc */
    0,                                 /* tp_print */
    0,                                 /* tp_getattr */
    0,                                 /* tp_setattr */
    0,                                 /* tp_compare */
    0,                                 /* tp_repr */
    0,                                 /* tp_as_number */
    0,                                 /* tp_as_sequence */
    0,                                 /* tp_as_mapping */
    0,                                 /* tp_hash */
    0,                                 /* tp_call */
    0,                                 /* tp_str */
    0,                                 /* tp_getattro */
    0,                                 /* tp_setattro */
    0,                                 /* tp_as_buffer */
    Py_TPFLAGS_DEFAULT,                /* tp_flags */
    "dirstate tuple",                  /* tp_doc */
    0,                                 /* tp_traverse */
    0,                                 /* tp_clear */
    0,                                 /* tp_richcompare */
    0,                                 /* tp_weaklistoffset */
    0,                                 /* tp_iter */
    0,                                 /* tp_iternext */
    dirstate_item_methods,             /* tp_methods */
    0,                                 /* tp_members */
    dirstate_item_getset,              /* tp_getset */
    0,                                 /* tp_base */
    0,                                 /* tp_dict */
    0,                                 /* tp_descr_get */
    0,                                 /* tp_descr_set */
    0,                                 /* tp_dictoffset */
    0,                                 /* tp_init */
    0,                                 /* tp_alloc */
    dirstate_item_new,                 /* tp_new */
};
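/*
 * On-disk v1 layout consumed by parse_dirstate() below: 40 bytes of parent
 * hashes (2 x 20), then one record per file made of a 17-byte header --
 * state (1 byte) followed by mode, size, mtime and filename length as
 * big-endian 32-bit integers -- and `flen` bytes holding the filename plus,
 * after an optional NUL separator, the copy source.
 */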

static PyObject *parse_dirstate(PyObject *self, PyObject *args)
{
	PyObject *dmap, *cmap, *parents = NULL, *ret = NULL;
	PyObject *fname = NULL, *cname = NULL, *entry = NULL;
	char state, *cur, *str, *cpos;
	int mode, size, mtime;
	unsigned int flen, pos = 40;
	Py_ssize_t len = 40;
	Py_ssize_t readlen;

	if (!PyArg_ParseTuple(args, "O!O!y#:parse_dirstate", &PyDict_Type,
	                      &dmap, &PyDict_Type, &cmap, &str, &readlen)) {
		goto quit;
	}

	len = readlen;

	/* read parents */
	if (len < 40) {
		PyErr_SetString(PyExc_ValueError,
		                "too little data for parents");
		goto quit;
	}

	parents = Py_BuildValue("y#y#", str, (Py_ssize_t)20, str + 20,
	                        (Py_ssize_t)20);
	if (!parents) {
		goto quit;
	}

	/* read filenames */
	while (pos >= 40 && pos < len) {
		if (pos + 17 > len) {
			PyErr_SetString(PyExc_ValueError,
			                "overflow in dirstate");
			goto quit;
		}
		cur = str + pos;
		/* unpack header */
		state = *cur;
		mode = getbe32(cur + 1);
		size = getbe32(cur + 5);
		mtime = getbe32(cur + 9);
		flen = getbe32(cur + 13);
		pos += 17;
		cur += 17;
		if (flen > len - pos) {
			PyErr_SetString(PyExc_ValueError,
			                "overflow in dirstate");
			goto quit;
		}

		entry = (PyObject *)dirstate_item_from_v1_data(state, mode,
		                                               size, mtime);
		if (!entry)
			goto quit;
		cpos = memchr(cur, 0, flen);
		if (cpos) {
			fname = PyBytes_FromStringAndSize(cur, cpos - cur);
			cname = PyBytes_FromStringAndSize(
			    cpos + 1, flen - (cpos - cur) - 1);
			if (!fname || !cname ||
			    PyDict_SetItem(cmap, fname, cname) == -1 ||
			    PyDict_SetItem(dmap, fname, entry) == -1) {
				goto quit;
			}
			Py_DECREF(cname);
		} else {
			fname = PyBytes_FromStringAndSize(cur, flen);
			if (!fname ||
			    PyDict_SetItem(dmap, fname, entry) == -1) {
				goto quit;
			}
		}
		Py_DECREF(fname);
		Py_DECREF(entry);
		fname = cname = entry = NULL;
		pos += flen;
	}

	ret = parents;
	Py_INCREF(ret);
quit:
	Py_XDECREF(fname);
	Py_XDECREF(cname);
	Py_XDECREF(entry);
	Py_XDECREF(parents);
	return ret;
}

/*
 * Efficiently pack a dirstate object into its on-disk format.
 */
static PyObject *pack_dirstate(PyObject *self, PyObject *args)
{
	PyObject *packobj = NULL;
	PyObject *map, *copymap, *pl, *mtime_unset = NULL;
	Py_ssize_t nbytes, pos, l;
	PyObject *k, *v = NULL, *pn;
	char *p, *s;

	if (!PyArg_ParseTuple(args, "O!O!O!:pack_dirstate", &PyDict_Type, &map,
	                      &PyDict_Type, &copymap, &PyTuple_Type, &pl)) {
		return NULL;
	}

	if (PyTuple_Size(pl) != 2) {
		PyErr_SetString(PyExc_TypeError, "expected 2-element tuple");
		return NULL;
	}

	/* Figure out how much we need to allocate. */
	for (nbytes = 40, pos = 0; PyDict_Next(map, &pos, &k, &v);) {
		PyObject *c;
		if (!PyBytes_Check(k)) {
			PyErr_SetString(PyExc_TypeError, "expected string key");
			goto bail;
		}
		nbytes += PyBytes_GET_SIZE(k) + 17;
		c = PyDict_GetItem(copymap, k);
		if (c) {
			if (!PyBytes_Check(c)) {
				PyErr_SetString(PyExc_TypeError,
				                "expected string key");
				goto bail;
			}
			nbytes += PyBytes_GET_SIZE(c) + 1;
		}
	}

	packobj = PyBytes_FromStringAndSize(NULL, nbytes);
	if (packobj == NULL) {
		goto bail;
	}

	p = PyBytes_AS_STRING(packobj);

	pn = PyTuple_GET_ITEM(pl, 0);
	if (PyBytes_AsStringAndSize(pn, &s, &l) == -1 || l != 20) {
		PyErr_SetString(PyExc_TypeError, "expected a 20-byte hash");
		goto bail;
	}
	memcpy(p, s, l);
	p += 20;
	pn = PyTuple_GET_ITEM(pl, 1);
	if (PyBytes_AsStringAndSize(pn, &s, &l) == -1 || l != 20) {
		PyErr_SetString(PyExc_TypeError, "expected a 20-byte hash");
		goto bail;
	}
	memcpy(p, s, l);
	p += 20;

	for (pos = 0; PyDict_Next(map, &pos, &k, &v);) {
		dirstateItemObject *tuple;
		char state;
		int mode, size, mtime;
		Py_ssize_t len, l;
		PyObject *o;
		char *t;

		if (!dirstate_tuple_check(v)) {
			PyErr_SetString(PyExc_TypeError,
			                "expected a dirstate tuple");
			goto bail;
		}
		tuple = (dirstateItemObject *)v;

		state = dirstate_item_c_v1_state(tuple);
		mode = dirstate_item_c_v1_mode(tuple);
		size = dirstate_item_c_v1_size(tuple);
		mtime = dirstate_item_c_v1_mtime(tuple);
		*p++ = state;
		putbe32((uint32_t)mode, p);
		putbe32((uint32_t)size, p + 4);
		putbe32((uint32_t)mtime, p + 8);
		t = p + 12;
		p += 16;
		len = PyBytes_GET_SIZE(k);
		memcpy(p, PyBytes_AS_STRING(k), len);
		p += len;
		o = PyDict_GetItem(copymap, k);
		if (o) {
			*p++ = '\0';
			l = PyBytes_GET_SIZE(o);
			memcpy(p, PyBytes_AS_STRING(o), l);
			p += l;
			len += l + 1;
		}
		putbe32((uint32_t)len, t);
	}

	pos = p - PyBytes_AS_STRING(packobj);
	if (pos != nbytes) {
		PyErr_Format(PyExc_SystemError, "bad dirstate size: %ld != %ld",
		             (long)pos, (long)nbytes);
		goto bail;
	}

	return packobj;
bail:
	Py_XDECREF(mtime_unset);
	Py_XDECREF(packobj);
	Py_XDECREF(v);
	return NULL;
}

#define BUMPED_FIX 1
#define USING_SHA_256 2
#define FM1_HEADER_SIZE (4 + 8 + 2 + 2 + 1 + 1 + 1)

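/*
 * FM1_HEADER_SIZE above matches the fixed prefix of a version-1
 * obsolescence marker as decoded by fm1readmarker() below: total size
 * (4 bytes), mtime as a big-endian double (8), timezone (2), flags (2),
 * then one byte each for the successor, parent and metadata counts.
 */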
1016 static PyObject *readshas(const char *source, unsigned char num,
1018 static PyObject *readshas(const char *source, unsigned char num,
1017 Py_ssize_t hashwidth)
1019 Py_ssize_t hashwidth)
1018 {
1020 {
1019 int i;
1021 int i;
1020 PyObject *list = PyTuple_New(num);
1022 PyObject *list = PyTuple_New(num);
1021 if (list == NULL) {
1023 if (list == NULL) {
1022 return NULL;
1024 return NULL;
1023 }
1025 }
1024 for (i = 0; i < num; i++) {
1026 for (i = 0; i < num; i++) {
1025 PyObject *hash = PyBytes_FromStringAndSize(source, hashwidth);
1027 PyObject *hash = PyBytes_FromStringAndSize(source, hashwidth);
1026 if (hash == NULL) {
1028 if (hash == NULL) {
1027 Py_DECREF(list);
1029 Py_DECREF(list);
1028 return NULL;
1030 return NULL;
1029 }
1031 }
1030 PyTuple_SET_ITEM(list, i, hash);
1032 PyTuple_SET_ITEM(list, i, hash);
1031 source += hashwidth;
1033 source += hashwidth;
1032 }
1034 }
1033 return list;
1035 return list;
1034 }
1036 }
1035
1037
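
readshas just slices a buffer of concatenated fixed-width hashes into a tuple. A self-contained Python equivalent, for illustration only:

def read_shas(source: bytes, num: int, hashwidth: int) -> tuple:
    # slice `num` consecutive `hashwidth`-byte hashes out of `source`
    return tuple(source[i * hashwidth:(i + 1) * hashwidth]
                 for i in range(num))

assert read_shas(b"A" * 20 + b"B" * 20, 2, 20) == (b"A" * 20, b"B" * 20)
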
static PyObject *fm1readmarker(const char *databegin, const char *dataend,
                               uint32_t *msize)
{
	const char *data = databegin;
	const char *meta;

	double mtime;
	int16_t tz;
	uint16_t flags;
	unsigned char nsuccs, nparents, nmetadata;
	Py_ssize_t hashwidth = 20;

	PyObject *prec = NULL, *parents = NULL, *succs = NULL;
	PyObject *metadata = NULL, *ret = NULL;
	int i;

	if (data + FM1_HEADER_SIZE > dataend) {
		goto overflow;
	}

	*msize = getbe32(data);
	data += 4;
	mtime = getbefloat64(data);
	data += 8;
	tz = getbeint16(data);
	data += 2;
	flags = getbeuint16(data);
	data += 2;

	if (flags & USING_SHA_256) {
		hashwidth = 32;
	}

	nsuccs = (unsigned char)(*data++);
	nparents = (unsigned char)(*data++);
	nmetadata = (unsigned char)(*data++);

	if (databegin + *msize > dataend) {
		goto overflow;
	}
	dataend = databegin + *msize; /* narrow down to marker size */

	if (data + hashwidth > dataend) {
		goto overflow;
	}
	prec = PyBytes_FromStringAndSize(data, hashwidth);
	data += hashwidth;
	if (prec == NULL) {
		goto bail;
	}

	if (data + nsuccs * hashwidth > dataend) {
		goto overflow;
	}
	succs = readshas(data, nsuccs, hashwidth);
	if (succs == NULL) {
		goto bail;
	}
	data += nsuccs * hashwidth;

	if (nparents == 1 || nparents == 2) {
		if (data + nparents * hashwidth > dataend) {
			goto overflow;
		}
		parents = readshas(data, nparents, hashwidth);
		if (parents == NULL) {
			goto bail;
		}
		data += nparents * hashwidth;
	} else {
		parents = Py_None;
		Py_INCREF(parents);
	}

	if (data + 2 * nmetadata > dataend) {
		goto overflow;
	}
	meta = data + (2 * nmetadata);
	metadata = PyTuple_New(nmetadata);
	if (metadata == NULL) {
		goto bail;
	}
	for (i = 0; i < nmetadata; i++) {
		PyObject *tmp, *left = NULL, *right = NULL;
		Py_ssize_t leftsize = (unsigned char)(*data++);
		Py_ssize_t rightsize = (unsigned char)(*data++);
		if (meta + leftsize + rightsize > dataend) {
			goto overflow;
		}
		left = PyBytes_FromStringAndSize(meta, leftsize);
		meta += leftsize;
		right = PyBytes_FromStringAndSize(meta, rightsize);
		meta += rightsize;
		tmp = PyTuple_New(2);
		if (!left || !right || !tmp) {
			Py_XDECREF(left);
			Py_XDECREF(right);
			Py_XDECREF(tmp);
			goto bail;
		}
		PyTuple_SET_ITEM(tmp, 0, left);
		PyTuple_SET_ITEM(tmp, 1, right);
		PyTuple_SET_ITEM(metadata, i, tmp);
	}
	ret = Py_BuildValue("(OOHO(di)O)", prec, succs, flags, metadata, mtime,
	                    (int)tz * 60, parents);
	goto bail; /* return successfully */

overflow:
	PyErr_SetString(PyExc_ValueError, "overflow in obsstore");
bail:
	Py_XDECREF(prec);
	Py_XDECREF(succs);
	Py_XDECREF(metadata);
	Py_XDECREF(parents);
	return ret;
}
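
The 19-byte FM1 header decoded above (FM1_HEADER_SIZE = 4 + 8 + 2 + 2 + 1 + 1 + 1) maps directly onto a struct format string. A minimal sketch of the same header parse in Python; the function name is hypothetical:

import struct

# total size (uint32), mtime (float64), tz offset (int16), flags (uint16),
# then three uint8 counts: successors, parents, metadata pairs
FM1_HEADER = struct.Struct(">IdhHBBB")
assert FM1_HEADER.size == 19  # matches FM1_HEADER_SIZE

def read_fm1_header(data: bytes):
    msize, mtime, tz, flags, nsuccs, nparents, nmeta = \
        FM1_HEADER.unpack_from(data)
    hashwidth = 32 if flags & 2 else 20  # 2 is USING_SHA_256
    return msize, mtime, tz, flags, nsuccs, nparents, nmeta, hashwidth
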
static PyObject *fm1readmarkers(PyObject *self, PyObject *args)
{
	const char *data, *dataend;
	Py_ssize_t datalen, offset, stop;
	PyObject *markers = NULL;

	if (!PyArg_ParseTuple(args, "y#nn", &data, &datalen, &offset, &stop)) {
		return NULL;
	}
	if (offset < 0) {
		PyErr_SetString(PyExc_ValueError,
		                "invalid negative offset in fm1readmarkers");
		return NULL;
	}
	if (stop > datalen) {
		PyErr_SetString(
		    PyExc_ValueError,
		    "stop longer than data length in fm1readmarkers");
		return NULL;
	}
	dataend = data + datalen;
	data += offset;
	markers = PyList_New(0);
	if (!markers) {
		return NULL;
	}
	while (offset < stop) {
		uint32_t msize;
		int error;
		PyObject *record = fm1readmarker(data, dataend, &msize);
		if (!record) {
			goto bail;
		}
		error = PyList_Append(markers, record);
		Py_DECREF(record);
		if (error) {
			goto bail;
		}
		data += msize;
		offset += msize;
	}
	return markers;
bail:
	Py_DECREF(markers);
	return NULL;
}
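
fm1readmarkers walks the buffer by each marker's self-declared size, so the loop always advances and never reads past `stop`. A sketch of that outer loop in Python, with the per-marker parser passed in as a callable (a hypothetical stand-in for fm1readmarker):

def read_markers(data: bytes, offset: int, stop: int, read_one):
    """Collect markers by repeatedly calling read_one(data, offset),
    which must return (record, msize); offset advances by each
    marker's self-declared total size, exactly like the C loop."""
    markers = []
    while offset < stop:
        record, msize = read_one(data, offset)
        markers.append(record)
        offset += msize
    return markers
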
static char parsers_doc[] = "Efficient content parsing.";

PyObject *encodedir(PyObject *self, PyObject *args);
PyObject *pathencode(PyObject *self, PyObject *args);
PyObject *lowerencode(PyObject *self, PyObject *args);
PyObject *parse_index2(PyObject *self, PyObject *args, PyObject *kwargs);

static PyMethodDef methods[] = {
    {"pack_dirstate", pack_dirstate, METH_VARARGS, "pack a dirstate\n"},
    {"parse_dirstate", parse_dirstate, METH_VARARGS, "parse a dirstate\n"},
    {"parse_index2", (PyCFunction)parse_index2, METH_VARARGS | METH_KEYWORDS,
     "parse a revlog index\n"},
    {"isasciistr", isasciistr, METH_VARARGS, "check if a string is ASCII\n"},
    {"asciilower", asciilower, METH_VARARGS, "lowercase an ASCII string\n"},
    {"asciiupper", asciiupper, METH_VARARGS, "uppercase an ASCII string\n"},
    {"dict_new_presized", dict_new_presized, METH_VARARGS,
     "construct a dict with an expected size\n"},
    {"make_file_foldmap", make_file_foldmap, METH_VARARGS,
     "make file foldmap\n"},
    {"jsonescapeu8fast", jsonescapeu8fast, METH_VARARGS,
     "escape a UTF-8 byte string to JSON (fast path)\n"},
    {"encodedir", encodedir, METH_VARARGS, "encodedir a path\n"},
    {"pathencode", pathencode, METH_VARARGS, "fncache-encode a path\n"},
    {"lowerencode", lowerencode, METH_VARARGS, "lower-encode a path\n"},
    {"fm1readmarkers", fm1readmarkers, METH_VARARGS,
     "parse v1 obsolete markers\n"},
    {NULL, NULL}};

void dirs_module_init(PyObject *mod);
void manifest_module_init(PyObject *mod);
void revlog_module_init(PyObject *mod);

static const int version = 20;

static void module_init(PyObject *mod)
{
	PyModule_AddIntConstant(mod, "version", version);

	/* This module constant has two purposes. First, it lets us unit test
	 * the ImportError raised without hard-coding any error text. This
	 * means we can change the text in the future without breaking tests,
	 * even across changesets without a recompile. Second, its presence
	 * can be used to determine whether the version-checking logic is
	 * present, which also helps in testing across changesets without a
	 * recompile. Note that this means the pure-Python version of parsers
	 * should not have this module constant. */
	PyModule_AddStringConstant(mod, "versionerrortext", versionerrortext);

	dirs_module_init(mod);
	manifest_module_init(mod);
	revlog_module_init(mod);

	if (PyType_Ready(&dirstateItemType) < 0) {
		return;
	}
	Py_INCREF(&dirstateItemType);
	PyModule_AddObject(mod, "DirstateItem", (PyObject *)&dirstateItemType);
}

static int check_python_version(void)
{
	PyObject *sys = PyImport_ImportModule("sys"), *ver;
	long hexversion;
	if (!sys) {
		return -1;
	}
	ver = PyObject_GetAttrString(sys, "hexversion");
	Py_DECREF(sys);
	if (!ver) {
		return -1;
	}
	hexversion = PyLong_AsLong(ver);
	Py_DECREF(ver);
	/* sys.hexversion is a 32-bit number by default, so the -1 case
	 * should only occur in unusual circumstances (e.g. if sys.hexversion
	 * is manually set to an invalid value). */
	if ((hexversion == -1) || (hexversion >> 16 != PY_VERSION_HEX >> 16)) {
		PyErr_Format(PyExc_ImportError,
		             "%s: The Mercurial extension "
		             "modules were compiled with Python " PY_VERSION
		             ", but "
		             "Mercurial is currently using Python with "
		             "sys.hexversion=%ld: "
		             "Python %s\n at: %s",
		             versionerrortext, hexversion, Py_GetVersion(),
		             Py_GetProgramFullPath());
		return -1;
	}
	return 0;
}
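
The comparison above masks off the micro-version and release-level bits: `hexversion >> 16` keeps only the major and minor version, so any CPython X.Y build can load an extension compiled against the same X.Y. A sketch of the same check from the Python side; the constant name and value are illustrative:

import sys

COMPILED_AGAINST = 0x030A00F0  # hypothetical PY_VERSION_HEX at build time

def abi_compatible(hexversion: int = sys.hexversion) -> bool:
    # compare only the major/minor fields (top 16 bits of hexversion)
    return hexversion >> 16 == COMPILED_AGAINST >> 16

assert abi_compatible(0x030A04F0)      # 3.10.4: same major/minor
assert not abi_compatible(0x030B00F0)  # 3.11.0: minor version differs
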
static struct PyModuleDef parsers_module = {PyModuleDef_HEAD_INIT, "parsers",
                                            parsers_doc, -1, methods};

PyMODINIT_FUNC PyInit_parsers(void)
{
	PyObject *mod;

	if (check_python_version() == -1)
		return NULL;
	mod = PyModule_Create(&parsers_module);
	module_init(mod);
	return mod;
}
@@ -1,251 +1,257 @@
# rewriteutil.py - utility functions for rewriting changesets
#
# Copyright 2017 Octobus <contact@octobus.net>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.


import re

from .i18n import _
from .node import (
    hex,
    nullrev,
)

from . import (
    error,
    node,
    obsolete,
    obsutil,
    revset,
    scmutil,
    util,
)


NODE_RE = re.compile(br'\b[0-9a-f]{6,64}\b')


def _formatrevs(repo, revs, maxrevs=4):
    """returns a reasonably sized string summarizing revisions

    If there are few enough revisions, we list them all. Otherwise we
    display a summary of the form:

        1ea73414a91b and 5 others
    """
    tonode = repo.changelog.node
    numrevs = len(revs)
    if numrevs < maxrevs:
        shorts = [node.short(tonode(r)) for r in revs]
        summary = b', '.join(shorts)
    else:
        first = revs.first()
        summary = _(b'%s and %d others')
        summary %= (node.short(tonode(first)), numrevs - 1)
    return summary


def precheck(repo, revs, action=b'rewrite', check_divergence=True):
    """check if revs can be rewritten
    action is used to control the error message.

    check_divergence allows skipping the divergence checks in cases like
    adding a prune marker (A, ()) to the obsstore (which can't be
    diverging).

    Make sure this function is called after taking the lock.
    """
    if nullrev in revs:
        msg = _(b"cannot %s the null revision") % action
        hint = _(b"no changeset checked out")
        raise error.InputError(msg, hint=hint)

    if any(util.safehasattr(r, 'rev') for r in revs):
        repo.ui.develwarn(b"rewriteutil.precheck called with ctx not revs")
        revs = (r.rev() for r in revs)

    if len(repo[None].parents()) > 1:
        raise error.StateError(
            _(b"cannot %s changesets while merging") % action
        )

    publicrevs = repo.revs(b'%ld and public()', revs)
    if publicrevs:
        summary = _formatrevs(repo, publicrevs)
        msg = _(b"cannot %s public changesets: %s") % (action, summary)
        hint = _(b"see 'hg help phases' for details")
        raise error.InputError(msg, hint=hint)

    newunstable = disallowednewunstable(repo, revs)
    if newunstable:
        hint = _(b"see 'hg help evolution.instability'")
        raise error.InputError(
            _(b"cannot %s changeset, as that will orphan %d descendants")
            % (action, len(newunstable)),
            hint=hint,
        )

    if not check_divergence:
        return

    if not obsolete.isenabled(repo, obsolete.allowdivergenceopt):
        new_divergence = _find_new_divergence(repo, revs)
        if new_divergence:
            local_ctx, other_ctx, base_ctx = new_divergence
            msg = _(
                b'cannot %s %s, as that creates content-divergence with %s'
            ) % (
                action,
                local_ctx,
                other_ctx,
            )
            if local_ctx.rev() != base_ctx.rev():
                msg += _(b', from %s') % base_ctx
            if repo.ui.verbose:
                if local_ctx.rev() != base_ctx.rev():
                    msg += _(
                        b'\n    changeset %s is a successor of changeset %s'
                    ) % (local_ctx, base_ctx)
                msg += _(
                    b'\n    changeset %s already has a successor in '
                    b'changeset %s\n'
                    b'    rewriting changeset %s would create '
                    b'"content-divergence"\n'
                    b'    set experimental.evolution.allowdivergence=True '
                    b'to skip this check'
                ) % (base_ctx, other_ctx, local_ctx)
                raise error.InputError(
                    msg,
                    hint=_(
                        b"see 'hg help evolution.instability' for details "
                        b"on content-divergence"
                    ),
                )
            else:
                raise error.InputError(
                    msg,
                    hint=_(
                        b"add --verbose for details or see "
                        b"'hg help evolution.instability'"
                    ),
                )
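
A hedged usage sketch of the new check_divergence flag: a caller that only records prune markers can safely skip the divergence scan. The wrapper name is hypothetical; per the docstring, the caller must already hold the repository lock:

from mercurial import rewriteutil

def precheck_prune(repo, revs):
    # A prune marker (A, ()) records no successors, so it cannot create
    # content-divergence; skip that scan but keep every other check.
    rewriteutil.precheck(repo, revs, action=b'prune',
                         check_divergence=False)
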
def disallowednewunstable(repo, revs):
    """Checks whether editing the revs will create new unstable changesets
    and whether we are allowed to create them.

    To allow new unstable changesets, set the config:
    `experimental.evolution.allowunstable=True`
    """
    allowunstable = obsolete.isenabled(repo, obsolete.allowunstableopt)
    if allowunstable:
        return revset.baseset()
    return repo.revs(b"(%ld::) - %ld", revs, revs)
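
The revset `(%ld::) - %ld` means "descendants of revs, minus revs themselves": whatever remains would become an orphan. The same set computed over plain Python sets, as a sketch (the `children` adjacency mapping is a hypothetical stand-in for the changelog graph):

def would_be_orphaned(revs, children):
    """Return descendants(revs) - revs, given a dict mapping each
    revision to its child revisions."""
    seen, stack = set(), list(revs)
    while stack:
        r = stack.pop()
        for child in children.get(r, ()):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen - set(revs)

# rewriting 1 would orphan 2 and 3 in a chain 0 -> 1 -> 2 -> 3:
assert would_be_orphaned({1}, {0: [1], 1: [2], 2: [3]}) == {2, 3}
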
def _find_new_divergence(repo, revs):
    obsrevs = repo.revs(b'%ld and obsolete()', revs)
    for r in obsrevs:
        div = find_new_divergence_from(repo, repo[r])
        if div:
            return (repo[r], repo[div[0]], repo.unfiltered()[div[1]])
    return None


def find_new_divergence_from(repo, ctx):
    """return divergent revision if rewriting an obsolete cset (ctx) will
    create divergence

    Returns (<other node>, <common ancestor node>) or None
    """
    if not ctx.obsolete():
        return None
    # We need to check two cases that can cause divergence:
    # case 1: the rev being rewritten has a non-obsolete successor (easily
    #     detected by successorssets)
    sset = obsutil.successorssets(repo, ctx.node())
    if sset:
        return (sset[0][0], ctx.node())
    else:
        # case 2: one of the precursors of the rev being revived has a
        #     non-obsolete successor (we need divergentsets for this)
        divsets = obsutil.divergentsets(repo, ctx)
        if divsets:
            nsuccset = divsets[0][b'divergentnodes']
            prec = divsets[0][b'commonpredecessor']
            return (nsuccset[0], prec)
        return None


def skip_empty_successor(ui, command):
    empty_successor = ui.config(b'rewrite', b'empty-successor')
    if empty_successor == b'skip':
        return True
    elif empty_successor == b'keep':
        return False
    else:
        raise error.ConfigError(
            _(
                b"%s doesn't know how to handle config "
                b"rewrite.empty-successor=%s (only 'skip' and 'keep' are "
                b"supported)"
            )
            % (command, empty_successor)
        )


def update_hash_refs(repo, commitmsg, pending=None):
    """Replace all obsolete commit hashes in the message with the current
    hash.

    If the obsolete commit was split or is divergent, the hash is not
    replaced as there's no way to know which successor to choose.

    For commands that update a series of commits in the current transaction,
    the new obsolete markers can be considered by setting ``pending`` to a
    mapping of ``pending[oldnode] = [successor_node1, successor_node2,..]``.
    """
    if not pending:
        pending = {}
    cache = {}
    hashes = re.findall(NODE_RE, commitmsg)
    unfi = repo.unfiltered()
    for h in hashes:
        try:
            fullnode = scmutil.resolvehexnodeidprefix(unfi, h)
        except error.WdirUnsupported:
            # Someone has an fffff... in a commit message we're
            # rewriting. Don't try rewriting that.
            continue
        if fullnode is None:
            continue
        ctx = unfi[fullnode]
        if not ctx.obsolete():
            successors = pending.get(fullnode)
            if successors is None:
                continue
            # obsutil.successorssets() returns a list of lists of nodes
            successors = [successors]
        else:
            successors = obsutil.successorssets(repo, ctx.node(), cache=cache)

        # We can't make any assumptions about how to update the hash if the
        # cset in question was split or diverged.
        if len(successors) == 1 and len(successors[0]) == 1:
            successor = successors[0][0]
            if successor is not None:
                newhash = hex(successor)
                commitmsg = commitmsg.replace(h, newhash[: len(h)])
            else:
                repo.ui.note(
                    _(
                        b'The stale commit message reference to %s could '
                        b'not be updated\n(The referenced commit was '
                        b'dropped)\n'
                    )
                    % h
                )
        else:
            repo.ui.note(
                _(
                    b'The stale commit message reference to %s could '
                    b'not be updated\n'
                )
                % h
            )

    return commitmsg
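
NODE_RE matches any 6-to-64 character lowercase-hex run on word boundaries; those candidates are then vetted by resolvehexnodeidprefix() before anything is replaced. A quick, self-contained illustration of the first step:

import re

NODE_RE = re.compile(br'\b[0-9a-f]{6,64}\b')

msg = b"amended in 1ea73414a91b, see also issue4 and deadbeef99"
assert re.findall(NODE_RE, msg) == [b'1ea73414a91b', b'deadbeef99']
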
@@ -1,913 +1,913 @@
# tags.py - read tag info from local repository
#
# Copyright 2009 Olivia Mackall <olivia@selenic.com>
# Copyright 2009 Greg Ward <greg@gerg.ca>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

# Currently this module only deals with reading and caching tags.
# Eventually, it could take care of updating (adding/removing/moving)
# tags too.


import errno
import io

from .node import (
    bin,
    hex,
    nullrev,
    short,
)
from .i18n import _
from . import (
    encoding,
    error,
    match as matchmod,
    scmutil,
    util,
)
from .utils import stringutil

# Tags computation can be expensive and caches exist to make it fast in
# the common case.
#
# The "hgtagsfnodes1" cache file caches the .hgtags filenode values for
# each revision in the repository. The file is effectively an array of
# fixed length records. Read the docs for "hgtagsfnodescache" for technical
# details.
#
# The .hgtags filenode cache grows in proportion to the length of the
# changelog. The file is truncated when the changelog is stripped.
#
# The purpose of the filenode cache is to avoid the most expensive part
# of finding global tags, which is looking up the .hgtags filenode in the
# manifest for each head. This can take dozens of milliseconds, or over
# 100ms for repositories with very large manifests. Multiplied by dozens
# or even hundreds of heads, that is a significant performance concern.
#
# There also exists a separate cache file for each repository filter.
# These "tags-*" files store information about the history of tags.
#
# The tags cache files consist of a cache validation line followed by
# a history of tags.
#
# The cache validation line has the format:
#
#   <tiprev> <tipnode> [<filteredhash>]
#
# <tiprev> is an integer revision and <tipnode> is a 40 character hex
# node for that changeset. These redundantly identify the repository
# tip from the time the cache was written. In addition, <filteredhash>,
# if present, is a 40 character hex hash of the contents of the filtered
# revisions for this filter. If the set of filtered revs changes, the
# hash will change and invalidate the cache.
#
# The history part of the tags cache consists of lines of the form:
#
#   <node> <tag>
#
# (This format is identical to that of .hgtags files.)
#
# <tag> is the tag name and <node> is the 40 character hex changeset
# the tag is associated with.
#
# Tags are written sorted by tag name.
#
# Tags associated with multiple changesets have an entry for each changeset.
# The most recent changeset (in terms of revlog ordering for the head
# setting it) for each tag is last.
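
The validation line described above is trivially machine-readable. A sketch of parsing it, mirroring what _readtagcache() does further down; the function name is illustrative:

from binascii import unhexlify

def parse_valid_line(line: bytes):
    """Split '<tiprev> <tipnode> [<filteredhash>]' into
    (int, bytes, bytes-or-None)."""
    fields = line.split()
    tiprev = int(fields[0])
    tipnode = unhexlify(fields[1])
    filteredhash = unhexlify(fields[2]) if len(fields) > 2 else None
    return tiprev, tipnode, filteredhash

rev, tip, fhash = parse_valid_line(b"42 " + b"ab" * 20)
assert rev == 42 and len(tip) == 20 and fhash is None
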
def fnoderevs(ui, repo, revs):
    """return the list of '.hgtags' fnodes used in a set of revisions

    This is returned as a list of unique fnodes. We use a list instead of a
    set because order matters when it comes to tags."""
    unfi = repo.unfiltered()
    tonode = unfi.changelog.node
    nodes = [tonode(r) for r in revs]
    fnodes = _getfnodes(ui, repo, nodes)
    fnodes = _filterfnodes(fnodes, nodes)
    return fnodes


def _nulltonone(repo, value):
    """convert nullid to None

    For tag value, nullid means "deleted". This small utility function helps
    translate that to None."""
    if value == repo.nullid:
        return None
    return value


def difftags(ui, repo, oldfnodes, newfnodes):
    """list differences between tags expressed in two sets of file-nodes

    The list contains entries in the form: (tagname, oldvalue, newvalue).
    None is used to express a missing value:
        ('foo', None, 'abcd') is a new tag,
        ('bar', 'ef01', None) is a deletion,
        ('baz', 'abcd', 'ef01') is a tag movement.
    """
    if oldfnodes == newfnodes:
        return []
    oldtags = _tagsfromfnodes(ui, repo, oldfnodes)
    newtags = _tagsfromfnodes(ui, repo, newfnodes)

    # list of (tag, old, new): None means missing
    entries = []
    for tag, (new, __) in newtags.items():
        new = _nulltonone(repo, new)
        old, __ = oldtags.pop(tag, (None, None))
        old = _nulltonone(repo, old)
        if old != new:
            entries.append((tag, old, new))
    # handle deleted tags
    for tag, (old, __) in oldtags.items():
        old = _nulltonone(repo, old)
        if old is not None:
            entries.append((tag, old, None))
    entries.sort()
    return entries


def writediff(fp, difflist):
    """write tags diff information to a file.

    Data are stored with a line-based format:

        <action> <hex-node> <tag-name>\n

    Actions are defined as follows:
        -R tag is removed,
        +A tag is added,
        -M tag is moved (old value),
        +M tag is moved (new value),

    Example:

        +A 875517b4806a848f942811a315a5bce30804ae85 t5

    See documentation of difftags output for details about the input.
    """
    add = b'+A %s %s\n'
    remove = b'-R %s %s\n'
    updateold = b'-M %s %s\n'
    updatenew = b'+M %s %s\n'
    for tag, old, new in difflist:
        # translate to hex
        if old is not None:
            old = hex(old)
        if new is not None:
            new = hex(new)
        # write to file
        if old is None:
            fp.write(add % (new, tag))
        elif new is None:
            fp.write(remove % (old, tag))
        else:
            fp.write(updateold % (old, tag))
            fp.write(updatenew % (new, tag))


177 """Find global tags in a repo: return a tagsmap
177 """Find global tags in a repo: return a tagsmap
178
178
179 tagsmap: tag name to (node, hist) 2-tuples.
179 tagsmap: tag name to (node, hist) 2-tuples.
180
180
181 The tags cache is read and updated as a side-effect of calling.
181 The tags cache is read and updated as a side-effect of calling.
182 """
182 """
183 (heads, tagfnode, valid, cachetags, shouldwrite) = _readtagcache(ui, repo)
183 (heads, tagfnode, valid, cachetags, shouldwrite) = _readtagcache(ui, repo)
184 if cachetags is not None:
184 if cachetags is not None:
185 assert not shouldwrite
185 assert not shouldwrite
186 # XXX is this really 100% correct? are there oddball special
186 # XXX is this really 100% correct? are there oddball special
187 # cases where a global tag should outrank a local tag but won't,
187 # cases where a global tag should outrank a local tag but won't,
188 # because cachetags does not contain rank info?
188 # because cachetags does not contain rank info?
189 alltags = {}
189 alltags = {}
190 _updatetags(cachetags, alltags)
190 _updatetags(cachetags, alltags)
191 return alltags
191 return alltags
192
192
193 for head in reversed(heads): # oldest to newest
193 for head in reversed(heads): # oldest to newest
194 assert repo.changelog.index.has_node(
194 assert repo.changelog.index.has_node(
195 head
195 head
196 ), b"tag cache returned bogus head %s" % short(head)
196 ), b"tag cache returned bogus head %s" % short(head)
197 fnodes = _filterfnodes(tagfnode, reversed(heads))
197 fnodes = _filterfnodes(tagfnode, reversed(heads))
198 alltags = _tagsfromfnodes(ui, repo, fnodes)
198 alltags = _tagsfromfnodes(ui, repo, fnodes)
199
199
200 # and update the cache (if necessary)
200 # and update the cache (if necessary)
201 if shouldwrite:
201 if shouldwrite:
202 _writetagcache(ui, repo, valid, alltags)
202 _writetagcache(ui, repo, valid, alltags)
203 return alltags
203 return alltags
204
204
205
205
206 def _filterfnodes(tagfnode, nodes):
206 def _filterfnodes(tagfnode, nodes):
207 """return a list of unique fnodes
207 """return a list of unique fnodes
208
208
209 The order of this list matches the order of "nodes". Preserving this order
209 The order of this list matches the order of "nodes". Preserving this order
210 is important as reading tags in different order provides different
210 is important as reading tags in different order provides different
211 results."""
211 results."""
212 seen = set() # set of fnode
212 seen = set() # set of fnode
213 fnodes = []
213 fnodes = []
214 for no in nodes: # oldest to newest
214 for no in nodes: # oldest to newest
215 fnode = tagfnode.get(no)
215 fnode = tagfnode.get(no)
216 if fnode and fnode not in seen:
216 if fnode and fnode not in seen:
217 seen.add(fnode)
217 seen.add(fnode)
218 fnodes.append(fnode)
218 fnodes.append(fnode)
219 return fnodes
219 return fnodes
220
220
221
221
def _tagsfromfnodes(ui, repo, fnodes):
    """return a tagsmap from a list of file-nodes

    tagsmap: tag name to (node, hist) 2-tuples.

    The order of the list matters."""
    alltags = {}
    fctx = None
    for fnode in fnodes:
        if fctx is None:
            fctx = repo.filectx(b'.hgtags', fileid=fnode)
        else:
            fctx = fctx.filectx(fnode)
        filetags = _readtags(ui, repo, fctx.data().splitlines(), fctx)
        _updatetags(filetags, alltags)
    return alltags


def readlocaltags(ui, repo, alltags, tagtypes):
    '''Read local tags in repo. Update alltags and tagtypes.'''
    try:
        data = repo.vfs.read(b"localtags")
    except IOError as inst:
        if inst.errno != errno.ENOENT:
            raise
        return

    # localtags is in the local encoding; re-encode to UTF-8 on
    # input for consistency with the rest of this module.
    filetags = _readtags(
        ui, repo, data.splitlines(), b"localtags", recode=encoding.fromlocal
    )

    # remove tags pointing to invalid nodes
    cl = repo.changelog
    for t in list(filetags):
        try:
            cl.rev(filetags[t][0])
        except (LookupError, ValueError):
            del filetags[t]

    _updatetags(filetags, alltags, b'local', tagtypes)


def _readtaghist(ui, repo, lines, fn, recode=None, calcnodelines=False):
    """Read tag definitions from a file (or any source of lines).

    This function returns two sortdicts with similar information:

    - the first dict, bintaghist, contains the tag information as expected
      by the _readtags function, i.e. a mapping from tag name to (node, hist):
        - node is the node id from the last line read for that name,
        - hist is the list of node ids previously associated with it (in
          file order). All node ids are binary, not hex.

    - the second dict, hextaglines, is a mapping from tag name to a list of
      [hexnode, line number] pairs, ordered from the oldest to the newest
      node.

    When calcnodelines is False the hextaglines dict is not calculated (an
    empty dict is returned). This is done to improve this function's
    performance in cases where the line numbers are not needed.
    """

    bintaghist = util.sortdict()
    hextaglines = util.sortdict()
    count = 0

    def dbg(msg):
        ui.debug(b"%s, line %d: %s\n" % (fn, count, msg))

    for nline, line in enumerate(lines):
        count += 1
        if not line:
            continue
        try:
            (nodehex, name) = line.split(b" ", 1)
        except ValueError:
            dbg(b"cannot parse entry")
            continue
        name = name.strip()
        if recode:
            name = recode(name)
        try:
            nodebin = bin(nodehex)
        except TypeError:
            dbg(b"node '%s' is not well formed" % nodehex)
            continue

        # update filetags
        if calcnodelines:
            # map tag name to a list of line numbers
            if name not in hextaglines:
                hextaglines[name] = []
            hextaglines[name].append([nodehex, nline])
            continue
        # map tag name to (node, hist)
        if name not in bintaghist:
            bintaghist[name] = []
        bintaghist[name].append(nodebin)
    return bintaghist, hextaglines


def _readtags(ui, repo, lines, fn, recode=None, calcnodelines=False):
    """Read tag definitions from a file (or any source of lines).

    Returns a mapping from tag name to (node, hist).

    "node" is the node id from the last line read for that name. "hist"
    is the list of node ids previously associated with it (in file order).
    All node ids are binary, not hex.
    """
    filetags, nodelines = _readtaghist(
        ui, repo, lines, fn, recode=recode, calcnodelines=calcnodelines
    )
    # util.sortdict().__setitem__ is much slower at replacing than inserting
    # new entries. The difference can matter if there are thousands of tags.
    # Create a new sortdict to avoid the performance penalty.
    newtags = util.sortdict()
    for tag, taghist in filetags.items():
        newtags[tag] = (taghist[-1], taghist[:-1])
    return newtags
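
The reshaping at the end of _readtags turns each in-file history list into the (node, hist) pair used everywhere else: the last entry read wins, and the earlier entries become history. In plain terms:

taghist = [b'node1', b'node2', b'node3']  # in file order, oldest first
node, hist = taghist[-1], taghist[:-1]
assert node == b'node3' and hist == [b'node1', b'node2']
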
def _updatetags(filetags, alltags, tagtype=None, tagtypes=None):
    """Incorporate the tag info read from one file into dictionaries

    The first one, 'alltags', is a "tagsmap" (see 'findglobaltags' for
    details).

    The second one, 'tagtypes', is optional and will be updated to track
    the "tagtype" of entries in the tagsmap. When set, the 'tagtype'
    argument also needs to be set."""
    if tagtype is None:
        assert tagtypes is None

    for name, nodehist in filetags.items():
        if name not in alltags:
            alltags[name] = nodehist
            if tagtype is not None:
                tagtypes[name] = tagtype
            continue

        # we prefer alltags[name] if:
        #  it supersedes us OR
        #  mutual supersedes and it has a higher rank
        # otherwise we win because we're tip-most
        anode, ahist = nodehist
        bnode, bhist = alltags[name]
        if (
            bnode != anode
            and anode in bhist
            and (bnode not in ahist or len(bhist) > len(ahist))
        ):
            anode = bnode
        elif tagtype is not None:
            tagtypes[name] = tagtype
        ahist.extend([n for n in bhist if n not in ahist])
        alltags[name] = anode, ahist


def _filename(repo):
    """name of a tagcache file for a given repo or repoview"""
    filename = b'tags2'
    if repo.filtername:
        filename = b'%s-%s' % (filename, repo.filtername)
    return filename


def _readtagcache(ui, repo):
    """Read the tag cache.

    Returns a tuple (heads, fnodes, validinfo, cachetags, shouldwrite).

    If the cache is completely up-to-date, "cachetags" is a dict of the
    form returned by _readtags() and "heads", "fnodes", and "validinfo" are
    None and "shouldwrite" is False.

    If the cache is not up to date, "cachetags" is None. "heads" is a list
    of all heads currently in the repository, ordered from tip to oldest.
    "validinfo" is a tuple describing cache validation info. This is used
    when writing the tags cache. "fnodes" is a mapping from head to .hgtags
    filenode. "shouldwrite" is True.

    If the cache is not up to date, the caller is responsible for reading tag
    info from each returned head. (See findglobaltags().)
    """
    try:
        cachefile = repo.cachevfs(_filename(repo), b'r')
        # force reading the file for static-http
        cachelines = iter(cachefile)
    except IOError:
        cachefile = None

    cacherev = None
    cachenode = None
    cachehash = None
    if cachefile:
        try:
            validline = next(cachelines)
            validline = validline.split()
            cacherev = int(validline[0])
            cachenode = bin(validline[1])
            if len(validline) > 2:
                cachehash = bin(validline[2])
        except Exception:
            # corruption of the cache, just recompute it.
            pass

    tipnode = repo.changelog.tip()
    tiprev = len(repo.changelog) - 1

    # Case 1 (common): tip is the same, so nothing has changed.
    # (Unchanged tip trivially means no changesets have been added.
    # But, thanks to localrepository.destroyed(), it also means none
    # have been destroyed by strip or rollback.)
    if (
        cacherev == tiprev
        and cachenode == tipnode
        and cachehash == scmutil.filteredhash(repo, tiprev)
    ):
        tags = _readtags(ui, repo, cachelines, cachefile.name)
        cachefile.close()
        return (None, None, None, tags, False)
    if cachefile:
        cachefile.close()  # ignore rest of file

    valid = (tiprev, tipnode, scmutil.filteredhash(repo, tiprev))

    repoheads = repo.heads()
    # Case 2 (uncommon): empty repo; get out quickly and don't bother
    # writing an empty cache.
    if repoheads == [repo.nullid]:
        return ([], {}, valid, {}, False)

    # Case 3 (uncommon): cache file missing or empty.

    # Case 4 (uncommon): tip rev decreased. This should only happen
    # when we're called from localrepository.destroyed(). Refresh the
    # cache so future invocations will not see disappeared heads in the
    # cache.

    # Case 5 (common): tip has changed, so we've added/replaced heads.

    # As it happens, the code to handle cases 3, 4, 5 is the same.

    # N.B. in case 4 (nodes destroyed), "new head" really means "newly
    # exposed".
    if not len(repo.file(b'.hgtags')):
        # No tags have ever been committed, so we can avoid a
        # potentially expensive search.
        return ([], {}, valid, None, True)

    # Now we have to look up the .hgtags filenode for every new head.
    # This is the most expensive part of finding tags, so performance
    # depends primarily on the size of newheads. Worst case: no cache
    # file, so newheads == repoheads.
    # Reversed order helps the cache ('repoheads' is in descending order)
    cachefnode = _getfnodes(ui, repo, reversed(repoheads))

    # Caller has to iterate over all heads, but can use the filenodes in
    # cachefnode to get to each .hgtags revision quickly.
    return (repoheads, cachefnode, valid, None, True)


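# Editor's illustration -- a hedged sketch (not part of upstream
# mercurial/tags.py) of the on-disk "tags2" layout that _readtagcache()
# consumes: a validity line "rev node [filteredhash]" followed by
# "node tagname" entries, all hex-encoded, later entries winning.
def _tags2_layout_example():
    cache = b'2 ' + b'ab' * 20 + b'\n' + b'cd' * 20 + b' v1.0\n'
    validline, tagline = cache.splitlines()
    cacherev = int(validline.split()[0])
    node, name = tagline.split()
    assert cacherev == 2 and len(node) == 40 and name == b'v1.0'

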
def _getfnodes(ui, repo, nodes):
    """return .hgtags fnodes for a list of changeset nodes

    Return value is a {node: fnode} mapping. There will be no entry for nodes
    without a '.hgtags' file.
    """
    starttime = util.timer()
    fnodescache = hgtagsfnodescache(repo.unfiltered())
    cachefnode = {}
    validated_fnodes = set()
    unknown_entries = set()
    for node in nodes:
        fnode = fnodescache.getfnode(node)
        flog = repo.file(b'.hgtags')
        if fnode != repo.nullid:
            if fnode not in validated_fnodes:
                if flog.hasnode(fnode):
                    validated_fnodes.add(fnode)
                else:
                    unknown_entries.add(node)
            cachefnode[node] = fnode

    if unknown_entries:
        fixed_nodemap = fnodescache.refresh_invalid_nodes(unknown_entries)
        for node, fnode in fixed_nodemap.items():
            if fnode != repo.nullid:
                cachefnode[node] = fnode

    fnodescache.write()

    duration = util.timer() - starttime
    ui.log(
        b'tagscache',
        b'%d/%d cache hits/lookups in %0.4f seconds\n',
        fnodescache.hitcount,
        fnodescache.lookupcount,
        duration,
    )
    return cachefnode


def _writetagcache(ui, repo, valid, cachetags):
    filename = _filename(repo)
    try:
        cachefile = repo.cachevfs(filename, b'w', atomictemp=True)
    except (OSError, IOError):
        return

    ui.log(
        b'tagscache',
        b'writing .hg/cache/%s with %d tags\n',
        filename,
        len(cachetags),
    )

    if valid[2]:
        cachefile.write(
            b'%d %s %s\n' % (valid[0], hex(valid[1]), hex(valid[2]))
        )
    else:
        cachefile.write(b'%d %s\n' % (valid[0], hex(valid[1])))

    # Tag names in the cache are in UTF-8 -- which is the whole reason
    # we keep them in UTF-8 throughout this module. If we converted
    # them to the local encoding on input, we would lose info writing
    # them to the cache.
    for (name, (node, hist)) in sorted(cachetags.items()):
        for n in hist:
            cachefile.write(b"%s %s\n" % (hex(n), name))
        cachefile.write(b"%s %s\n" % (hex(node), name))

    try:
        cachefile.close()
    except (OSError, IOError):
        pass


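# Editor's illustration -- a hedged sketch (not part of upstream
# mercurial/tags.py) of the validity header _writetagcache() emits; the
# optional third field would be the filtered-revision hash.
def _validity_line_example():
    import binascii

    tiprev = 2
    tipnode = b'\xab' * 20  # made-up tip node
    line = b'%d %s\n' % (tiprev, binascii.hexlify(tipnode))
    assert line == b'2 ' + b'ab' * 20 + b'\n'

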
def tag(repo, names, node, message, local, user, date, editor=False):
    """tag a revision with one or more symbolic names.

    names is a list of strings or, when adding a single tag, names may be a
    string.

    if local is True, the tags are stored in a per-repository file.
    otherwise, they are stored in the .hgtags file, and a new
    changeset is committed with the change.

    keyword arguments:

    local: whether to store tags in a non-version-controlled file
    (default False)

    message: commit message to use if committing

    user: name of user to use if committing

    date: date tuple to use if committing"""

    if not local:
        m = matchmod.exact([b'.hgtags'])
        st = repo.status(match=m, unknown=True, ignored=True)
        if any(
            (
                st.modified,
                st.added,
                st.removed,
                st.deleted,
                st.unknown,
                st.ignored,
            )
        ):
            raise error.Abort(
                _(b'working copy of .hgtags is changed'),
                hint=_(b'please commit .hgtags manually'),
            )

    with repo.wlock():
        repo.tags()  # instantiate the cache
        _tag(repo, names, node, message, local, user, date, editor=editor)


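# Editor's illustration -- a hedged usage sketch, not part of upstream
# mercurial/tags.py. It is only defined, never called, because it needs a
# real repo object; the tag name, user, and message are made up.
def _tag_usage_example(repo):
    node = repo[b'.'].node()  # tag the working directory's first parent
    tag(
        repo,
        b'v1.0',
        node,
        b'Added tag v1.0',
        False,  # not local: write .hgtags and commit the change
        b'An Editor <editor@example.com>',
        None,  # date: None should mean "now"
    )

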
def _tag(
    repo, names, node, message, local, user, date, extra=None, editor=False
):
    if isinstance(names, bytes):
        names = (names,)

    branches = repo.branchmap()
    for name in names:
        repo.hook(b'pretag', throw=True, node=hex(node), tag=name, local=local)
        if name in branches:
            repo.ui.warn(
                _(b"warning: tag %s conflicts with existing branch name\n")
                % name
            )

    def writetags(fp, names, munge, prevtags):
        fp.seek(0, io.SEEK_END)
        if prevtags and not prevtags.endswith(b'\n'):
            fp.write(b'\n')
        for name in names:
            if munge:
                m = munge(name)
            else:
                m = name

            if repo._tagscache.tagtypes and name in repo._tagscache.tagtypes:
                old = repo.tags().get(name, repo.nullid)
                fp.write(b'%s %s\n' % (hex(old), m))
            fp.write(b'%s %s\n' % (hex(node), m))
        fp.close()

    prevtags = b''
    if local:
        try:
            fp = repo.vfs(b'localtags', b'r+')
        except IOError:
            fp = repo.vfs(b'localtags', b'a')
        else:
            prevtags = fp.read()

        # local tags are stored in the current charset
        writetags(fp, names, None, prevtags)
        for name in names:
            repo.hook(b'tag', node=hex(node), tag=name, local=local)
        return

    try:
        fp = repo.wvfs(b'.hgtags', b'rb+')
    except IOError as e:
        if e.errno != errno.ENOENT:
            raise
        fp = repo.wvfs(b'.hgtags', b'ab')
    else:
        prevtags = fp.read()

    # committed tags are stored in UTF-8
    writetags(fp, names, encoding.fromlocal, prevtags)

    fp.close()

    repo.invalidatecaches()

    if b'.hgtags' not in repo.dirstate:
        repo[None].add([b'.hgtags'])

    m = matchmod.exact([b'.hgtags'])
    tagnode = repo.commit(
        message, user, date, extra=extra, match=m, editor=editor
    )

    for name in names:
        repo.hook(b'tag', node=hex(node), tag=name, local=local)

    return tagnode


_fnodescachefile = b'hgtagsfnodes1'
_fnodesrecsize = 4 + 20  # changeset fragment + filenode
_fnodesmissingrec = b'\xff' * 24


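# Editor's illustration -- a hedged sketch (not part of upstream
# mercurial/tags.py) of one cache record: a 4-byte changeset-node prefix
# followed by the 20-byte .hgtags filenode, 24 bytes per changelog revision.
def _fnodes_record_example():
    import struct

    prefix = b'\x12\x34\x56\x78'  # made-up changeset node fragment
    fnode = b'\xaa' * 20  # made-up .hgtags filenode
    record = prefix + fnode
    assert len(record) == _fnodesrecsize
    got_prefix, got_fnode = struct.unpack('>4s20s', record)
    assert (got_prefix, got_fnode) == (prefix, fnode)

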
class hgtagsfnodescache:
    """Persistent cache mapping revisions to .hgtags filenodes.

    The cache is an array of records. Each item in the array corresponds to
    a changelog revision. Values in the array contain the first 4 bytes of
    the node hash and the 20-byte .hgtags filenode for that revision.

    The first 4 bytes are present as a form of verification. Repository
    stripping and rewriting may change the node at a numeric revision in the
    changelog. The changeset fragment serves as a verifier to detect
    rewriting. This logic is shared with the rev branch cache (see
    branchmap.py).

    The instance holds the full cache content in memory, but entries are
    only parsed on read.

    Instances behave like lists. ``c[i]`` works where i is a rev or
    changeset node. Missing indexes are populated automatically on access.
    """

    def __init__(self, repo):
        assert repo.filtername is None

        self._repo = repo

        # Only for reporting purposes.
        self.lookupcount = 0
        self.hitcount = 0

        try:
            data = repo.cachevfs.read(_fnodescachefile)
        except (OSError, IOError):
            data = b""
        self._raw = bytearray(data)

        # The end state of self._raw is an array that is of the exact length
        # required to hold a record for every revision in the repository.
        # We truncate or extend the array as necessary. self._dirtyoffset is
        # defined to be the start offset at which we need to write the output
        # file. This offset is also adjusted when new entries are calculated
        # for array members.
        cllen = len(repo.changelog)
        wantedlen = cllen * _fnodesrecsize
        rawlen = len(self._raw)

        self._dirtyoffset = None

        rawlentokeep = min(
            wantedlen, (rawlen // _fnodesrecsize) * _fnodesrecsize
        )
        if rawlen > rawlentokeep:
            # There's no easy way to truncate array instances. This seems
            # slightly less evil than copying a potentially large array slice.
            for i in range(rawlen - rawlentokeep):
                self._raw.pop()
            rawlen = len(self._raw)
            self._dirtyoffset = rawlen
        if rawlen < wantedlen:
            if self._dirtyoffset is None:
                self._dirtyoffset = rawlen
            # TODO: zero fill entire record, because it's invalid not missing?
            self._raw.extend(b'\xff' * (wantedlen - rawlen))

    def getfnode(self, node, computemissing=True):
        """Obtain the filenode of the .hgtags file at a specified revision.

        If the value is in the cache, the entry will be validated and returned.
        Otherwise, the filenode will be computed and returned unless
        "computemissing" is False. In that case, None will be returned if
        the entry is missing or False if the entry is invalid, without
        any potentially expensive computation being performed.

        If a .hgtags file does not exist at the specified revision, nullid is
        returned.
        """
        if node == self._repo.nullid:
            return node

        ctx = self._repo[node]
        rev = ctx.rev()

        self.lookupcount += 1

        offset = rev * _fnodesrecsize
        record = b'%s' % self._raw[offset : offset + _fnodesrecsize]
        properprefix = node[0:4]

        # Validate and return existing entry.
        if record != _fnodesmissingrec and len(record) == _fnodesrecsize:
            fileprefix = record[0:4]

            if fileprefix == properprefix:
                self.hitcount += 1
                return record[4:]

            # Fall through.

        # If we get here, the entry is either missing or invalid.

        if not computemissing:
            if record != _fnodesmissingrec:
                return False
            return None

        fnode = self._computefnode(node)
        self._writeentry(offset, properprefix, fnode)
        return fnode

    def _computefnode(self, node):
        """Find the tag filenode for a node which is missing or invalid
        in the cache."""
        ctx = self._repo[node]
        rev = ctx.rev()
        fnode = None
        cl = self._repo.changelog
        p1rev, p2rev = cl._uncheckedparentrevs(rev)
        p1node = cl.node(p1rev)
        p1fnode = self.getfnode(p1node, computemissing=False)
        if p2rev != nullrev:
            # There are some non-merge changesets where p1 is null and p2
            # is set. Processing them as merges is just slower, but still
            # gives a good result.
            p2node = cl.node(p2rev)
            p2fnode = self.getfnode(p2node, computemissing=False)
            if p1fnode != p2fnode:
                # we cannot rely on readfast because we don't know against
                # what parent the readfast delta is computed
                p1fnode = None
        if p1fnode:
            mctx = ctx.manifestctx()
            fnode = mctx.readfast().get(b'.hgtags')
            if fnode is None:
                fnode = p1fnode
        if fnode is None:
            # Populate missing entry.
            try:
                fnode = ctx.filenode(b'.hgtags')
            except error.LookupError:
                # No .hgtags file on this revision.
                fnode = self._repo.nullid
        return fnode

    def setfnode(self, node, fnode):
        """Set the .hgtags filenode for a given changeset."""
        assert len(fnode) == 20
        ctx = self._repo[node]

        # Do a lookup first to avoid writing if nothing has changed.
        if self.getfnode(ctx.node(), computemissing=False) == fnode:
            return

        self._writeentry(ctx.rev() * _fnodesrecsize, node[0:4], fnode)

    def refresh_invalid_nodes(self, nodes):
        """Recompute the filenodes for a given set of nodes whose cached
        filenodes are unknown.

        Also updates the in-memory cache with the correct filenodes. The
        caller needs to take care of calling `.write()` so that updates
        are persisted.

        Returns a map {node: recomputed fnode}.
        """
        fixed_nodemap = {}
        for node in nodes:
            fnode = self._computefnode(node)
            fixed_nodemap[node] = fnode
            self.setfnode(node, fnode)
        return fixed_nodemap

    def _writeentry(self, offset, prefix, fnode):
        # Slices on array instances only accept other arrays.
        entry = bytearray(prefix + fnode)
        self._raw[offset : offset + _fnodesrecsize] = entry
        # self._dirtyoffset could be None.
        self._dirtyoffset = min(self._dirtyoffset or 0, offset or 0)

    def write(self):
        """Perform all necessary writes to the cache file.

        This may be a no-op if no writes are needed or if a write lock
        could not be obtained.
        """
        if self._dirtyoffset is None:
            return

        data = self._raw[self._dirtyoffset :]
        if not data:
            return

        repo = self._repo

        try:
            lock = repo.lock(wait=False)
        except error.LockError:
            repo.ui.log(
                b'tagscache',
                b'not writing .hg/cache/%s because '
                b'lock cannot be acquired\n' % _fnodescachefile,
            )
            return

        try:
            f = repo.cachevfs.open(_fnodescachefile, b'ab')
            try:
                # if the file has been truncated
                actualoffset = f.tell()
                if actualoffset < self._dirtyoffset:
                    self._dirtyoffset = actualoffset
                    data = self._raw[self._dirtyoffset :]
                f.seek(self._dirtyoffset)
                f.truncate()
                repo.ui.log(
                    b'tagscache',
                    b'writing %d bytes to cache/%s\n'
                    % (len(data), _fnodescachefile),
                )
                f.write(data)
                self._dirtyoffset = None
            finally:
                f.close()
        except (IOError, OSError) as inst:
            repo.ui.log(
                b'tagscache',
                b"couldn't write cache/%s: %s\n"
                % (_fnodescachefile, stringutil.forcebytestr(inst)),
            )
        finally:
            lock.release()
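

# Editor's illustration -- a hedged, standalone model of the resize rule in
# hgtagsfnodescache.__init__ (not part of upstream mercurial/tags.py): keep
# only whole records, then pad with 0xff ("missing") to one record per rev.
def _fnodes_resize_example():
    raw = bytearray(b'\x00' * 50)  # pretend on-disk cache of 50 bytes
    cllen = 3  # pretend the repo has 3 revisions
    wantedlen = cllen * _fnodesrecsize  # 72 bytes
    keep = (len(raw) // _fnodesrecsize) * _fnodesrecsize  # 48 bytes
    del raw[keep:]
    raw.extend(b'\xff' * (wantedlen - len(raw)))
    assert len(raw) == wantedlen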
@@ -1,819 +1,825 b''
#require serve no-reposimplestore no-chg

#testcases stream-legacy stream-bundle2

#if stream-legacy
  $ cat << EOF >> $HGRCPATH
  > [server]
  > bundle2.stream = no
  > EOF
#endif

Initialize repository

  $ hg init server
  $ cd server
  $ sh $TESTDIR/testlib/stream_clone_setup.sh
  adding 00changelog-ab349180a0405010.nd
  adding 00changelog.d
  adding 00changelog.i
  adding 00changelog.n
  adding 00manifest.d
  adding 00manifest.i
  adding container/isam-build-centos7/bazel-coverage-generator-sandboxfs-compatibility-0758e3e4f6057904d44399bd666faba9e7f40686.patch
  adding data/foo.d
  adding data/foo.i
  adding data/foo.n
  adding data/undo.babar
  adding data/undo.d
  adding data/undo.foo.d
  adding data/undo.foo.i
  adding data/undo.foo.n
  adding data/undo.i
  adding data/undo.n
  adding data/undo.py
  adding foo.d
  adding foo.i
  adding foo.n
  adding meta/foo.d
  adding meta/foo.i
  adding meta/foo.n
  adding meta/undo.babar
  adding meta/undo.d
  adding meta/undo.foo.d
  adding meta/undo.foo.i
  adding meta/undo.foo.n
  adding meta/undo.i
  adding meta/undo.n
  adding meta/undo.py
  adding savanah/foo.d
  adding savanah/foo.i
  adding savanah/foo.n
  adding savanah/undo.babar
  adding savanah/undo.d
  adding savanah/undo.foo.d
  adding savanah/undo.foo.i
  adding savanah/undo.foo.n
  adding savanah/undo.i
  adding savanah/undo.n
  adding savanah/undo.py
  adding store/C\xc3\xa9lesteVille_is_a_Capital_City (esc)
  adding store/foo.d
  adding store/foo.i
  adding store/foo.n
  adding store/undo.babar
  adding store/undo.d
  adding store/undo.foo.d
  adding store/undo.foo.i
  adding store/undo.foo.n
  adding store/undo.i
  adding store/undo.n
  adding store/undo.py
  adding undo.babar
  adding undo.d
  adding undo.foo.d
  adding undo.foo.i
  adding undo.foo.n
  adding undo.i
  adding undo.n
  adding undo.py

  $ hg --config server.uncompressed=false serve -p $HGPORT -d --pid-file=hg.pid
  $ cat hg.pid > $DAEMON_PIDS
  $ cd ..

Check local clone
==================

The logic is close enough to the uncompressed case.
It is present here to reuse the testing around files with "special" names.

  $ hg clone server local-clone
  updating to branch default
  1088 files updated, 0 files merged, 0 files removed, 0 files unresolved

Check that the clone went well

  $ hg verify -R local-clone
  checking changesets
  checking manifests
  crosschecking files in changesets and manifests
  checking files
  checked 3 changesets with 1088 changes to 1088 files

Check uncompressed
==================

Cannot stream clone when server.uncompressed is set

  $ get-with-headers.py $LOCALIP:$HGPORT '?cmd=stream_out'
  200 Script output follows

  1

#if stream-legacy
  $ hg debugcapabilities http://localhost:$HGPORT
  Main capabilities:
    batch
    branchmap
    $USUAL_BUNDLE2_CAPS_SERVER$
    changegroupsubset
    compression=$BUNDLE2_COMPRESSIONS$
    getbundle
    httpheader=1024
    httpmediatype=0.1rx,0.1tx,0.2tx
    known
    lookup
    pushkey
    unbundle=HG10GZ,HG10BZ,HG10UN
    unbundlehash
  Bundle2 capabilities:
    HG20
    bookmarks
    changegroup
      01
      02
    checkheads
      related
    digests
      md5
      sha1
      sha512
    error
      abort
      unsupportedcontent
      pushraced
      pushkey
    hgtagsfnodes
    listkeys
    phases
      heads
    pushkey
    remote-changegroup
      http
      https

  $ hg clone --stream -U http://localhost:$HGPORT server-disabled
  warning: stream clone requested but server has them disabled
  requesting all changes
  adding changesets
  adding manifests
  adding file changes
  added 3 changesets with 1088 changes to 1088 files
  new changesets 96ee1d7354c4:5223b5e3265f

  $ get-with-headers.py $LOCALIP:$HGPORT '?cmd=getbundle' content-type --bodyfile body --hgproto 0.2 --requestheader "x-hgarg-1=bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=0&common=0000000000000000000000000000000000000000&heads=c17445101a72edac06facd130d14808dfbd5c7c2&stream=1"
  200 Script output follows
  content-type: application/mercurial-0.2


  $ f --size body --hexdump --bytes 100
  body: size=232
  0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
  0010: cf 0b 45 52 52 4f 52 3a 41 42 4f 52 54 00 00 00 |..ERROR:ABORT...|
  0020: 00 01 01 07 3c 04 72 6d 65 73 73 61 67 65 73 74 |....<.rmessagest|
  0030: 72 65 61 6d 20 64 61 74 61 20 72 65 71 75 65 73 |ream data reques|
  0040: 74 65 64 20 62 75 74 20 73 65 72 76 65 72 20 64 |ted but server d|
  0050: 6f 65 73 20 6e 6f 74 20 61 6c 6c 6f 77 20 74 68 |oes not allow th|
  0060: 69 73 20 66 |is f|

#endif
#if stream-bundle2
  $ hg debugcapabilities http://localhost:$HGPORT
  Main capabilities:
    batch
    branchmap
    $USUAL_BUNDLE2_CAPS_SERVER$
    changegroupsubset
    compression=$BUNDLE2_COMPRESSIONS$
    getbundle
    httpheader=1024
    httpmediatype=0.1rx,0.1tx,0.2tx
    known
    lookup
    pushkey
    unbundle=HG10GZ,HG10BZ,HG10UN
    unbundlehash
  Bundle2 capabilities:
    HG20
    bookmarks
    changegroup
      01
      02
    checkheads
      related
    digests
      md5
      sha1
      sha512
    error
      abort
      unsupportedcontent
      pushraced
      pushkey
    hgtagsfnodes
    listkeys
    phases
      heads
    pushkey
    remote-changegroup
      http
      https

  $ hg clone --stream -U http://localhost:$HGPORT server-disabled
  warning: stream clone requested but server has them disabled
  requesting all changes
  adding changesets
  adding manifests
  adding file changes
  added 3 changesets with 1088 changes to 1088 files
  new changesets 96ee1d7354c4:5223b5e3265f

  $ get-with-headers.py $LOCALIP:$HGPORT '?cmd=getbundle' content-type --bodyfile body --hgproto 0.2 --requestheader "x-hgarg-1=bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=0&common=0000000000000000000000000000000000000000&heads=c17445101a72edac06facd130d14808dfbd5c7c2&stream=1"
  200 Script output follows
  content-type: application/mercurial-0.2


  $ f --size body --hexdump --bytes 100
  body: size=232
  0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
  0010: cf 0b 45 52 52 4f 52 3a 41 42 4f 52 54 00 00 00 |..ERROR:ABORT...|
  0020: 00 01 01 07 3c 04 72 6d 65 73 73 61 67 65 73 74 |....<.rmessagest|
  0030: 72 65 61 6d 20 64 61 74 61 20 72 65 71 75 65 73 |ream data reques|
  0040: 74 65 64 20 62 75 74 20 73 65 72 76 65 72 20 64 |ted but server d|
  0050: 6f 65 73 20 6e 6f 74 20 61 6c 6c 6f 77 20 74 68 |oes not allow th|
  0060: 69 73 20 66 |is f|

#endif

  $ killdaemons.py
  $ cd server
  $ hg serve -p $HGPORT -d --pid-file=hg.pid --error errors.txt
  $ cat hg.pid > $DAEMON_PIDS
  $ cd ..

Basic clone

#if stream-legacy
  $ hg clone --stream -U http://localhost:$HGPORT clone1
  streaming all changes
  1090 files to transfer, 102 KB of data (no-zstd !)
  transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
  1090 files to transfer, 98.8 KB of data (zstd !)
  transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
  searching for changes
  no changes found
  $ cat server/errors.txt
#endif
#if stream-bundle2
  $ hg clone --stream -U http://localhost:$HGPORT clone1
  streaming all changes
  1093 files to transfer, 102 KB of data (no-zstd !)
  transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
  1093 files to transfer, 98.9 KB of data (zstd !)
  transferred 98.9 KB in * seconds (* */sec) (glob) (zstd !)

  $ ls -1 clone1/.hg/cache
  branch2-base
  branch2-immutable
  branch2-served
  branch2-served.hidden
  branch2-visible
  branch2-visible-hidden
  rbc-names-v1
  rbc-revs-v1
  tags2
  tags2-served
  $ cat server/errors.txt
#endif

getbundle requests with stream=1 are uncompressed

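(Editor's note: a hedged Python sketch, not part of this test file. The
get-with-headers.py call below amounts to roughly the plain HTTP GET shown
here; the port lookup and the reduced x-hgarg-1 argument string are
assumptions for illustration.)

    import os
    import urllib.request

    port = os.environ.get('HGPORT', '8000')  # assumed test server port
    req = urllib.request.Request(
        'http://localhost:%s/?cmd=getbundle' % port,
        headers={
            # advertise wire protocol 0.2 with zlib or no compression
            'X-HgProto-1': '0.1 0.2 comp=zlib,none',
            # percent-encoded bundle arguments, split across x-hgarg-N
            'x-hgarg-1': 'cg=0&stream=1'
            '&common=0000000000000000000000000000000000000000'
            '&heads=c17445101a72edac06facd130d14808dfbd5c7c2',
        },
    )
    # urllib.request.urlopen(req) would return the raw, uncompressed
    # application/mercurial-0.2 stream (a running server is assumed).
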
  $ get-with-headers.py $LOCALIP:$HGPORT '?cmd=getbundle' content-type --bodyfile body --hgproto '0.1 0.2 comp=zlib,none' --requestheader "x-hgarg-1=bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=0&common=0000000000000000000000000000000000000000&heads=c17445101a72edac06facd130d14808dfbd5c7c2&stream=1"
  200 Script output follows
  content-type: application/mercurial-0.2


#if no-zstd no-rust
  $ f --size --hex --bytes 256 body
  body: size=119123
  0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
  0010: 62 07 53 54 52 45 41 4d 32 00 00 00 00 03 00 09 |b.STREAM2.......|
  0020: 06 09 04 0c 26 62 79 74 65 63 6f 75 6e 74 31 30 |....&bytecount10|
  0030: 34 31 31 35 66 69 6c 65 63 6f 75 6e 74 31 30 39 |4115filecount109|
  0040: 33 72 65 71 75 69 72 65 6d 65 6e 74 73 67 65 6e |3requirementsgen|
  0050: 65 72 61 6c 64 65 6c 74 61 25 32 43 72 65 76 6c |eraldelta%2Crevl|
  0060: 6f 67 76 31 25 32 43 73 70 61 72 73 65 72 65 76 |ogv1%2Csparserev|
  0070: 6c 6f 67 00 00 80 00 73 08 42 64 61 74 61 2f 30 |log....s.Bdata/0|
  0080: 2e 69 00 03 00 01 00 00 00 00 00 00 00 02 00 00 |.i..............|
  0090: 00 01 00 00 00 00 00 00 00 01 ff ff ff ff ff ff |................|
  00a0: ff ff 80 29 63 a0 49 d3 23 87 bf ce fe 56 67 92 |...)c.I.#....Vg.|
  00b0: 67 2c 69 d1 ec 39 00 00 00 00 00 00 00 00 00 00 |g,i..9..........|
  00c0: 00 00 75 30 73 26 45 64 61 74 61 2f 30 30 63 68 |..u0s&Edata/00ch|
  00d0: 61 6e 67 65 6c 6f 67 2d 61 62 33 34 39 31 38 30 |angelog-ab349180|
  00e0: 61 30 34 30 35 30 31 30 2e 6e 64 2e 69 00 03 00 |a0405010.nd.i...|
  00f0: 01 00 00 00 00 00 00 00 05 00 00 00 04 00 00 00 |................|
#endif
#if zstd no-rust
  $ f --size --hex --bytes 256 body
  body: size=116310 (no-bigendian !)
  body: size=116305 (bigendian !)
  0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
  0010: 7c 07 53 54 52 45 41 4d 32 00 00 00 00 03 00 09 ||.STREAM2.......|
  0020: 06 09 04 0c 40 62 79 74 65 63 6f 75 6e 74 31 30 |....@bytecount10|
  0030: 31 32 37 36 66 69 6c 65 63 6f 75 6e 74 31 30 39 |1276filecount109| (no-bigendian !)
  0030: 31 32 37 31 66 69 6c 65 63 6f 75 6e 74 31 30 39 |1271filecount109| (bigendian !)
  0040: 33 72 65 71 75 69 72 65 6d 65 6e 74 73 67 65 6e |3requirementsgen|
  0050: 65 72 61 6c 64 65 6c 74 61 25 32 43 72 65 76 6c |eraldelta%2Crevl|
  0060: 6f 67 2d 63 6f 6d 70 72 65 73 73 69 6f 6e 2d 7a |og-compression-z|
  0070: 73 74 64 25 32 43 72 65 76 6c 6f 67 76 31 25 32 |std%2Crevlogv1%2|
  0080: 43 73 70 61 72 73 65 72 65 76 6c 6f 67 00 00 80 |Csparserevlog...|
  0090: 00 73 08 42 64 61 74 61 2f 30 2e 69 00 03 00 01 |.s.Bdata/0.i....|
  00a0: 00 00 00 00 00 00 00 02 00 00 00 01 00 00 00 00 |................|
  00b0: 00 00 00 01 ff ff ff ff ff ff ff ff 80 29 63 a0 |.............)c.|
  00c0: 49 d3 23 87 bf ce fe 56 67 92 67 2c 69 d1 ec 39 |I.#....Vg.g,i..9|
  00d0: 00 00 00 00 00 00 00 00 00 00 00 00 75 30 73 26 |............u0s&|
  00e0: 45 64 61 74 61 2f 30 30 63 68 61 6e 67 65 6c 6f |Edata/00changelo|
  00f0: 67 2d 61 62 33 34 39 31 38 30 61 30 34 30 35 30 |g-ab349180a04050|
#endif
#if zstd rust no-dirstate-v2
  $ f --size --hex --bytes 256 body
  body: size=116310
  0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
  0010: 7c 07 53 54 52 45 41 4d 32 00 00 00 00 03 00 09 ||.STREAM2.......|
  0020: 06 09 04 0c 40 62 79 74 65 63 6f 75 6e 74 31 30 |....@bytecount10|
  0030: 31 32 37 36 66 69 6c 65 63 6f 75 6e 74 31 30 39 |1276filecount109|
  0040: 33 72 65 71 75 69 72 65 6d 65 6e 74 73 67 65 6e |3requirementsgen|
  0050: 65 72 61 6c 64 65 6c 74 61 25 32 43 72 65 76 6c |eraldelta%2Crevl|
  0060: 6f 67 2d 63 6f 6d 70 72 65 73 73 69 6f 6e 2d 7a |og-compression-z|
  0070: 73 74 64 25 32 43 72 65 76 6c 6f 67 76 31 25 32 |std%2Crevlogv1%2|
  0080: 43 73 70 61 72 73 65 72 65 76 6c 6f 67 00 00 80 |Csparserevlog...|
  0090: 00 73 08 42 64 61 74 61 2f 30 2e 69 00 03 00 01 |.s.Bdata/0.i....|
  00a0: 00 00 00 00 00 00 00 02 00 00 00 01 00 00 00 00 |................|
  00b0: 00 00 00 01 ff ff ff ff ff ff ff ff 80 29 63 a0 |.............)c.|
  00c0: 49 d3 23 87 bf ce fe 56 67 92 67 2c 69 d1 ec 39 |I.#....Vg.g,i..9|
  00d0: 00 00 00 00 00 00 00 00 00 00 00 00 75 30 73 26 |............u0s&|
  00e0: 45 64 61 74 61 2f 30 30 63 68 61 6e 67 65 6c 6f |Edata/00changelo|
  00f0: 67 2d 61 62 33 34 39 31 38 30 61 30 34 30 35 30 |g-ab349180a04050|
#endif
#if zstd dirstate-v2
  $ f --size --hex --bytes 256 body
  body: size=109549
  0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
  0010: c0 07 53 54 52 45 41 4d 32 00 00 00 00 03 00 09 |..STREAM2.......|
  0020: 05 09 04 0c 85 62 79 74 65 63 6f 75 6e 74 39 35 |.....bytecount95|
  0030: 38 39 37 66 69 6c 65 63 6f 75 6e 74 31 30 33 30 |897filecount1030|
  0040: 72 65 71 75 69 72 65 6d 65 6e 74 73 64 6f 74 65 |requirementsdote|
  0050: 6e 63 6f 64 65 25 32 43 65 78 70 2d 64 69 72 73 |ncode%2Cexp-dirs|
  0060: 74 61 74 65 2d 76 32 25 32 43 66 6e 63 61 63 68 |tate-v2%2Cfncach|
  0070: 65 25 32 43 67 65 6e 65 72 61 6c 64 65 6c 74 61 |e%2Cgeneraldelta|
  0080: 25 32 43 70 65 72 73 69 73 74 65 6e 74 2d 6e 6f |%2Cpersistent-no|
  0090: 64 65 6d 61 70 25 32 43 72 65 76 6c 6f 67 2d 63 |demap%2Crevlog-c|
  00a0: 6f 6d 70 72 65 73 73 69 6f 6e 2d 7a 73 74 64 25 |ompression-zstd%|
  00b0: 32 43 72 65 76 6c 6f 67 76 31 25 32 43 73 70 61 |2Crevlogv1%2Cspa|
  00c0: 72 73 65 72 65 76 6c 6f 67 25 32 43 73 74 6f 72 |rserevlog%2Cstor|
  00d0: 65 00 00 80 00 73 08 42 64 61 74 61 2f 30 2e 69 |e....s.Bdata/0.i|
375 00d0: 65 00 00 80 00 73 08 42 64 61 74 61 2f 30 2e 69 |e....s.Bdata/0.i|
376 00e0: 00 03 00 01 00 00 00 00 00 00 00 02 00 00 00 01 |................|
376 00e0: 00 03 00 01 00 00 00 00 00 00 00 02 00 00 00 01 |................|
377 00f0: 00 00 00 00 00 00 00 01 ff ff ff ff ff ff ff ff |................|
377 00f0: 00 00 00 00 00 00 00 01 ff ff ff ff ff ff ff ff |................|
378 #endif
378 #endif
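
The dumps above all start with the same framing. The following sketch (added
for illustration; it is not part of the original test and only asserts
structure that holds in every variant shown) walks the leading bytes of
"body": a length-prefixed compression engine name, the HG20 magic, a uint32
stream-parameter blob size, then the first part header with its
length-prefixed type name, uint32 part id, and uint8 mandatory/advisory
parameter counts:

$ "$PYTHON" << 'EOF'
> import struct
>
> with open('body', 'rb') as fh:
>     data = fh.read()
>
> # length-prefixed compression engine name ("\x04none" in the dumps)
> (comp_len,) = struct.unpack('>B', data[0:1])
> offset = 1 + comp_len
> assert data[1:offset] == b'none'
>
> # bundle2 magic, then the size of the stream-level parameter blob
> assert data[offset:offset + 4] == b'HG20'
> offset += 4
> (params_size,) = struct.unpack('>I', data[offset:offset + 4])
> offset += 4 + params_size
>
> # first part: uint32 header size, length-prefixed part type name,
> # uint32 part id, then the two parameter counts
> (header_size,) = struct.unpack('>I', data[offset:offset + 4])
> offset += 4
> (name_size,) = struct.unpack('>B', data[offset:offset + 1])
> assert data[offset + 1:offset + 1 + name_size] == b'STREAM2'
> offset += 1 + name_size + 4
> mandatory, advisory = struct.unpack('>BB', data[offset:offset + 2])
> assert (mandatory, advisory) == (3, 0)
> EOF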

--uncompressed is an alias to --stream

#if stream-legacy
$ hg clone --uncompressed -U http://localhost:$HGPORT clone1-uncompressed
streaming all changes
1090 files to transfer, 102 KB of data (no-zstd !)
transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
1090 files to transfer, 98.8 KB of data (zstd !)
transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
searching for changes
no changes found
#endif
#if stream-bundle2
$ hg clone --uncompressed -U http://localhost:$HGPORT clone1-uncompressed
streaming all changes
1093 files to transfer, 102 KB of data (no-zstd !)
transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
1093 files to transfer, 98.9 KB of data (zstd !)
transferred 98.9 KB in * seconds (* */sec) (glob) (zstd !)
#endif

Clone with background file closing enabled

#if stream-legacy
$ hg --debug --config worker.backgroundclose=true --config worker.backgroundcloseminfilecount=1 clone --stream -U http://localhost:$HGPORT clone-background | grep -v adding
using http://localhost:$HGPORT/
sending capabilities command
sending branchmap command
streaming all changes
sending stream_out command
1090 files to transfer, 102 KB of data (no-zstd !)
1090 files to transfer, 98.8 KB of data (zstd !)
starting 4 threads for background file closing
updating the branch cache
transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
query 1; heads
sending batch command
searching for changes
all remote heads known locally
no changes found
sending getbundle command
bundle2-input-bundle: with-transaction
bundle2-input-part: "listkeys" (params: 1 mandatory) supported
bundle2-input-part: "phase-heads" supported
bundle2-input-part: total payload size 24
bundle2-input-bundle: 2 parts total
checking for updated bookmarks
updating the branch cache
(sent 5 HTTP requests and * bytes; received * bytes in responses) (glob)
#endif
#if stream-bundle2
$ hg --debug --config worker.backgroundclose=true --config worker.backgroundcloseminfilecount=1 clone --stream -U http://localhost:$HGPORT clone-background | grep -v adding
using http://localhost:$HGPORT/
sending capabilities command
query 1; heads
sending batch command
streaming all changes
sending getbundle command
bundle2-input-bundle: with-transaction
bundle2-input-part: "stream2" (params: 3 mandatory) supported
applying stream bundle
1093 files to transfer, 102 KB of data (no-zstd !)
1093 files to transfer, 98.9 KB of data (zstd !)
starting 4 threads for background file closing
starting 4 threads for background file closing
updating the branch cache
transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
bundle2-input-part: total payload size 118984 (no-zstd !)
transferred 98.9 KB in * seconds (* */sec) (glob) (zstd !)
bundle2-input-part: total payload size 116145 (zstd no-bigendian !)
bundle2-input-part: total payload size 116140 (zstd bigendian !)
bundle2-input-part: "listkeys" (params: 1 mandatory) supported
bundle2-input-bundle: 2 parts total
checking for updated bookmarks
updating the branch cache
(sent 3 HTTP requests and * bytes; received * bytes in responses) (glob)
#endif

Cannot stream clone when there are secret changesets

$ hg -R server phase --force --secret -r tip
$ hg clone --stream -U http://localhost:$HGPORT secret-denied
warning: stream clone requested but server has them disabled
requesting all changes
adding changesets
adding manifests
adding file changes
added 2 changesets with 1025 changes to 1025 files
new changesets 96ee1d7354c4:c17445101a72

$ killdaemons.py

Streaming of secrets can be overridden by server config

$ cd server
$ hg serve --config server.uncompressedallowsecret=true -p $HGPORT -d --pid-file=hg.pid
$ cat hg.pid > $DAEMON_PIDS
$ cd ..

#if stream-legacy
$ hg clone --stream -U http://localhost:$HGPORT secret-allowed
streaming all changes
1090 files to transfer, 102 KB of data (no-zstd !)
transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
1090 files to transfer, 98.8 KB of data (zstd !)
transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
searching for changes
no changes found
#endif
#if stream-bundle2
$ hg clone --stream -U http://localhost:$HGPORT secret-allowed
streaming all changes
1093 files to transfer, 102 KB of data (no-zstd !)
transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
1093 files to transfer, 98.9 KB of data (zstd !)
transferred 98.9 KB in * seconds (* */sec) (glob) (zstd !)
#endif

$ killdaemons.py

Verify interaction between preferuncompressed and secret presence

$ cd server
$ hg serve --config server.preferuncompressed=true -p $HGPORT -d --pid-file=hg.pid
$ cat hg.pid > $DAEMON_PIDS
$ cd ..

$ hg clone -U http://localhost:$HGPORT preferuncompressed-secret
requesting all changes
adding changesets
adding manifests
adding file changes
added 2 changesets with 1025 changes to 1025 files
new changesets 96ee1d7354c4:c17445101a72

$ killdaemons.py

Clone not allowed when full bundles disabled and can't serve secrets

$ cd server
$ hg serve --config server.disablefullbundle=true -p $HGPORT -d --pid-file=hg.pid
$ cat hg.pid > $DAEMON_PIDS
$ cd ..

$ hg clone --stream http://localhost:$HGPORT secret-full-disabled
warning: stream clone requested but server has them disabled
requesting all changes
remote: abort: server has pull-based clones disabled
abort: pull failed on remote
(remove --pull if specified or upgrade Mercurial)
[100]

Local stream clone with secrets involved
(This is just a test of behavior: if you have access to the repo's files,
there is no security boundary, so it isn't important to prevent a clone here.)

$ hg clone -U --stream server local-secret
warning: stream clone requested but server has them disabled
requesting all changes
adding changesets
adding manifests
adding file changes
added 2 changesets with 1025 changes to 1025 files
new changesets 96ee1d7354c4:c17445101a72

Stream clone while repo is changing:

$ mkdir changing
$ cd changing

extension for delaying the server process so we can reliably modify the repo
while cloning

$ cat > stream_steps.py <<EOF
> import os
> import sys
> from mercurial import (
>     encoding,
>     extensions,
>     streamclone,
>     testing,
> )
> WALKED_FILE_1 = encoding.environ[b'HG_TEST_STREAM_WALKED_FILE_1']
> WALKED_FILE_2 = encoding.environ[b'HG_TEST_STREAM_WALKED_FILE_2']
>
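> # wrappers for the two server-side synchronization points: the first
> # signals the test that the file list has been walked, the second
> # (reached once the write lock has been released) waits for the test to
> # finish modifying the repository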
> def _test_sync_point_walk_1(orig, repo):
>     testing.write_file(WALKED_FILE_1)
>
> def _test_sync_point_walk_2(orig, repo):
>     assert repo._currentlock(repo._lockref) is None
>     testing.wait_file(WALKED_FILE_2)
>
> extensions.wrapfunction(
>     streamclone,
>     '_test_sync_point_walk_1',
>     _test_sync_point_walk_1
> )
> extensions.wrapfunction(
>     streamclone,
>     '_test_sync_point_walk_2',
>     _test_sync_point_walk_2
> )
> EOF

prepare a repo with a small and a big file to cover both code paths in emitrevlogdata

$ hg init repo
$ touch repo/f1
$ $TESTDIR/seq.py 50000 > repo/f2
$ hg -R repo ci -Aqm "0"
$ HG_TEST_STREAM_WALKED_FILE_1="$TESTTMP/sync_file_walked_1"
$ export HG_TEST_STREAM_WALKED_FILE_1
$ HG_TEST_STREAM_WALKED_FILE_2="$TESTTMP/sync_file_walked_2"
$ export HG_TEST_STREAM_WALKED_FILE_2
$ HG_TEST_STREAM_WALKED_FILE_3="$TESTTMP/sync_file_walked_3"
$ export HG_TEST_STREAM_WALKED_FILE_3
# $ cat << EOF >> $HGRCPATH
# > [hooks]
# > pre-clone=rm -f "$TESTTMP/sync_file_walked_*"
# > EOF
$ hg serve -R repo -p $HGPORT1 -d --error errors.log --pid-file=hg.pid --config extensions.stream_steps="$RUNTESTDIR/testlib/ext-stream-clone-steps.py"
$ cat hg.pid >> $DAEMON_PIDS

clone while modifying the repo between stat'ing files with the write lock
held and actually serving the file content

$ (hg clone -q --stream -U http://localhost:$HGPORT1 clone; touch "$HG_TEST_STREAM_WALKED_FILE_3") &
$ $RUNTESTDIR/testlib/wait-on-file 10 $HG_TEST_STREAM_WALKED_FILE_1
$ echo >> repo/f1
$ echo >> repo/f2
$ hg -R repo ci -m "1" --config ui.timeout.warn=-1
$ touch $HG_TEST_STREAM_WALKED_FILE_2
$ $RUNTESTDIR/testlib/wait-on-file 10 $HG_TEST_STREAM_WALKED_FILE_3
$ hg -R clone id
000000000000
$ cat errors.log
$ cd ..

Stream repository with bookmarks
--------------------------------

(revert introduction of secret changeset)

$ hg -R server phase --draft 'secret()'

add a bookmark

$ hg -R server bookmark -r tip some-bookmark

clone it

#if stream-legacy
$ hg clone --stream http://localhost:$HGPORT with-bookmarks
streaming all changes
1090 files to transfer, 102 KB of data (no-zstd !)
transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
1090 files to transfer, 98.8 KB of data (zstd !)
transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
searching for changes
no changes found
updating to branch default
1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
#endif
#if stream-bundle2
$ hg clone --stream http://localhost:$HGPORT with-bookmarks
streaming all changes
1096 files to transfer, 102 KB of data (no-zstd !)
transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
1096 files to transfer, 99.1 KB of data (zstd !)
transferred 99.1 KB in * seconds (* */sec) (glob) (zstd !)
updating to branch default
1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
#endif
$ hg verify -R with-bookmarks
checking changesets
checking manifests
crosschecking files in changesets and manifests
checking files
checked 3 changesets with 1088 changes to 1088 files
$ hg -R with-bookmarks bookmarks
some-bookmark 2:5223b5e3265f

Stream repository with phases
-----------------------------

Clone as publishing

$ hg -R server phase -r 'all()'
0: draft
1: draft
2: draft

#if stream-legacy
$ hg clone --stream http://localhost:$HGPORT phase-publish
streaming all changes
1090 files to transfer, 102 KB of data (no-zstd !)
transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
1090 files to transfer, 98.8 KB of data (zstd !)
transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
searching for changes
no changes found
updating to branch default
1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
#endif
#if stream-bundle2
$ hg clone --stream http://localhost:$HGPORT phase-publish
streaming all changes
1096 files to transfer, 102 KB of data (no-zstd !)
transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
1096 files to transfer, 99.1 KB of data (zstd !)
transferred 99.1 KB in * seconds (* */sec) (glob) (zstd !)
updating to branch default
1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
#endif
$ hg verify -R phase-publish
checking changesets
checking manifests
crosschecking files in changesets and manifests
checking files
checked 3 changesets with 1088 changes to 1088 files
$ hg -R phase-publish phase -r 'all()'
0: public
1: public
2: public

Clone as non publishing

$ cat << EOF >> server/.hg/hgrc
> [phases]
> publish = False
> EOF
$ killdaemons.py
$ hg -R server serve -p $HGPORT -d --pid-file=hg.pid
$ cat hg.pid > $DAEMON_PIDS

#if stream-legacy

With v1 of the stream protocol, changesets are always cloned as public. This
makes stream v1 unsuitable for non-publishing repositories.

$ hg clone --stream http://localhost:$HGPORT phase-no-publish
streaming all changes
1090 files to transfer, 102 KB of data (no-zstd !)
transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
1090 files to transfer, 98.8 KB of data (zstd !)
transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
searching for changes
no changes found
updating to branch default
1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
$ hg -R phase-no-publish phase -r 'all()'
0: public
1: public
2: public
#endif
#if stream-bundle2
$ hg clone --stream http://localhost:$HGPORT phase-no-publish
streaming all changes
1097 files to transfer, 102 KB of data (no-zstd !)
transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
1097 files to transfer, 99.1 KB of data (zstd !)
transferred 99.1 KB in * seconds (* */sec) (glob) (zstd !)
updating to branch default
1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
$ hg -R phase-no-publish phase -r 'all()'
0: draft
1: draft
2: draft
#endif
$ hg verify -R phase-no-publish
checking changesets
checking manifests
crosschecking files in changesets and manifests
checking files
checked 3 changesets with 1088 changes to 1088 files

$ killdaemons.py

#if stream-legacy

With v1 of the stream protocol, changesets are always cloned as public. There
is no obsolescence marker exchange in stream v1.

#endif
#if stream-bundle2

Stream repository with obsolescence
-----------------------------------

Clone non-publishing with obsolescence

$ cat >> $HGRCPATH << EOF
> [experimental]
> evolution=all
> EOF

$ cd server
$ echo foo > foo
$ hg -q commit -m 'about to be pruned'
$ hg debugobsolete `hg log -r . -T '{node}'` -d '0 0' -u test --record-parents
1 new obsolescence markers
obsoleted 1 changesets
$ hg up null -q
$ hg log -T '{rev}: {phase}\n'
2: draft
1: draft
0: draft
$ hg serve -p $HGPORT -d --pid-file=hg.pid
$ cat hg.pid > $DAEMON_PIDS
$ cd ..

$ hg clone -U --stream http://localhost:$HGPORT with-obsolescence
streaming all changes
1098 files to transfer, 102 KB of data (no-zstd !)
transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
1098 files to transfer, 99.5 KB of data (zstd !)
transferred 99.5 KB in * seconds (* */sec) (glob) (zstd !)
$ hg -R with-obsolescence log -T '{rev}: {phase}\n'
2: draft
1: draft
0: draft
$ hg debugobsolete -R with-obsolescence
8c206a663911c1f97f2f9d7382e417ae55872cfa 0 {5223b5e3265f0df40bb743da62249413d74ac70f} (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
$ hg verify -R with-obsolescence
checking changesets
checking manifests
crosschecking files in changesets and manifests
checking files
checked 4 changesets with 1089 changes to 1088 files

$ hg clone -U --stream --config experimental.evolution=0 http://localhost:$HGPORT with-obsolescence-no-evolution
streaming all changes
remote: abort: server has obsolescence markers, but client cannot receive them via stream clone
abort: pull failed on remote
[100]

$ killdaemons.py

#endif

Cloning a repo with no requirements doesn't produce an obscure error

$ mkdir -p empty-repo/.hg
$ hg clone -q --stream ssh://user@dummy/empty-repo empty-repo2
$ hg --cwd empty-repo2 verify -q
@@ -1,935 +1,990 @@
setup

$ cat >> $HGRCPATH << EOF
> [extensions]
> blackbox=
> mock=$TESTDIR/mockblackbox.py
> [blackbox]
> track = command, commandfinish, tagscache
> EOF

Helper functions:

$ cacheexists() {
> [ -f .hg/cache/tags2-visible ] && echo "tag cache exists" || echo "no tag cache"
> }

$ fnodescacheexists() {
> [ -f .hg/cache/hgtagsfnodes1 ] && echo "fnodes cache exists" || echo "no fnodes cache"
> }

$ dumptags() {
> rev=$1
> echo "rev $rev: .hgtags:"
> hg cat -r$rev .hgtags
> }

# XXX need to test that the tag cache works when we strip an old head
# and add a new one rooted off non-tip: i.e. node and rev of tip are the
# same, but stuff has changed behind tip.

Setup:

$ hg init t
$ cd t
$ cacheexists
no tag cache
$ fnodescacheexists
no fnodes cache
$ hg id
000000000000 tip
$ cacheexists
no tag cache
$ fnodescacheexists
no fnodes cache
$ echo a > a
$ hg add a
$ hg commit -m "test"
$ hg co
0 files updated, 0 files merged, 0 files removed, 0 files unresolved
$ hg identify
acb14030fe0a tip
$ hg identify -r 'wdir()'
acb14030fe0a tip
$ cacheexists
tag cache exists
No fnodes cache because the .hgtags file doesn't exist
(this is an implementation detail)
$ fnodescacheexists
no fnodes cache

Try corrupting the cache

$ printf 'a b' > .hg/cache/tags2-visible
$ hg identify
acb14030fe0a tip
$ cacheexists
tag cache exists
$ fnodescacheexists
no fnodes cache
$ hg identify
acb14030fe0a tip

Create local tag with long name:

$ T=`hg identify --debug --id`
$ hg tag -l "This is a local tag with a really long name!"
$ hg tags
tip 0:acb14030fe0a
This is a local tag with a really long name! 0:acb14030fe0a
$ rm .hg/localtags

Create a tag behind hg's back:

$ echo "$T first" > .hgtags
$ cat .hgtags
acb14030fe0a21b60322c440ad2d20cf7685a376 first
$ hg add .hgtags
$ hg commit -m "add tags"
$ hg tags
tip 1:b9154636be93
first 0:acb14030fe0a
$ hg identify
b9154636be93 tip

We should have a fnodes cache now that we have a real tag
The cache should have an empty entry for rev 0 and a valid entry for rev 1.


$ fnodescacheexists
fnodes cache exists
$ f --size --hexdump .hg/cache/hgtagsfnodes1
.hg/cache/hgtagsfnodes1: size=48
0000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................|
0010: ff ff ff ff ff ff ff ff b9 15 46 36 26 b7 b4 a7 |..........F6&...|
0020: 73 e0 9e e3 c5 2f 51 0e 19 e0 5e 1f f9 66 d8 59 |s..../Q...^..f.Y|
$ hg debugtagscache
0 acb14030fe0a21b60322c440ad2d20cf7685a376 missing
1 b9154636be938d3d431e75a7c906504a079bfe07 26b7b4a773e09ee3c52f510e19e05e1ff966d859
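
A minimal decoder for the record layout just dumped (added for illustration;
it is not part of the original test): each revision owns a 24-byte record
holding 4 bytes of the changeset node followed by the 20-byte .hgtags
filenode, with an all-0xff record standing in for "missing":

$ "$PYTHON" << 'EOF'
> import binascii
> with open('.hg/cache/hgtagsfnodes1', 'rb') as fh:
>     data = fh.read()
> for rev in range(len(data) // 24):
>     record = data[rev * 24:(rev + 1) * 24]
>     if record == b'\xff' * 24:
>         print('%d missing' % rev)
>         continue
>     node_prefix = binascii.hexlify(record[:4]).decode('ascii')
>     fnode = binascii.hexlify(record[4:]).decode('ascii')
>     print('%d %s %s' % (rev, node_prefix, fnode))
> EOF
0 missing
1 b9154636 26b7b4a773e09ee3c52f510e19e05e1ff966d859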

Repeat with cold tag cache:

$ rm -f .hg/cache/tags2-visible .hg/cache/hgtagsfnodes1
$ hg identify
b9154636be93 tip

$ fnodescacheexists
fnodes cache exists
$ f --size --hexdump .hg/cache/hgtagsfnodes1
.hg/cache/hgtagsfnodes1: size=48
0000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................|
0010: ff ff ff ff ff ff ff ff b9 15 46 36 26 b7 b4 a7 |..........F6&...|
0020: 73 e0 9e e3 c5 2f 51 0e 19 e0 5e 1f f9 66 d8 59 |s..../Q...^..f.Y|

And again, but now unable to write tag cache or lock file:

#if unix-permissions no-fsmonitor

$ rm -f .hg/cache/tags2-visible .hg/cache/hgtagsfnodes1
$ chmod 555 .hg/cache
$ hg identify
b9154636be93 tip
$ chmod 755 .hg/cache

(this block should be protected by no-fsmonitor, because "chmod 555 .hg"
makes watchman fail to access files under .hg)

$ chmod 555 .hg
$ hg identify
b9154636be93 tip
$ chmod 755 .hg
#endif

Tag cache debug info written to blackbox log

$ rm -f .hg/cache/tags2-visible .hg/cache/hgtagsfnodes1
$ hg identify
b9154636be93 tip
$ hg blackbox -l 6
1970-01-01 00:00:00.000 bob @b9154636be938d3d431e75a7c906504a079bfe07 (5000)> identify
1970-01-01 00:00:00.000 bob @b9154636be938d3d431e75a7c906504a079bfe07 (5000)> writing 48 bytes to cache/hgtagsfnodes1
1970-01-01 00:00:00.000 bob @b9154636be938d3d431e75a7c906504a079bfe07 (5000)> 0/2 cache hits/lookups in * seconds (glob)
1970-01-01 00:00:00.000 bob @b9154636be938d3d431e75a7c906504a079bfe07 (5000)> writing .hg/cache/tags2-visible with 1 tags
1970-01-01 00:00:00.000 bob @b9154636be938d3d431e75a7c906504a079bfe07 (5000)> identify exited 0 after * seconds (glob)
1970-01-01 00:00:00.000 bob @b9154636be938d3d431e75a7c906504a079bfe07 (5000)> blackbox -l 6

Failure to acquire lock results in no write

$ rm -f .hg/cache/tags2-visible .hg/cache/hgtagsfnodes1
$ echo 'foo:1' > .hg/store/lock
$ hg identify
b9154636be93 tip
$ hg blackbox -l 6
1970-01-01 00:00:00.000 bob @b9154636be938d3d431e75a7c906504a079bfe07 (5000)> identify
1970-01-01 00:00:00.000 bob @b9154636be938d3d431e75a7c906504a079bfe07 (5000)> not writing .hg/cache/hgtagsfnodes1 because lock cannot be acquired
1970-01-01 00:00:00.000 bob @b9154636be938d3d431e75a7c906504a079bfe07 (5000)> 0/2 cache hits/lookups in * seconds (glob)
1970-01-01 00:00:00.000 bob @b9154636be938d3d431e75a7c906504a079bfe07 (5000)> writing .hg/cache/tags2-visible with 1 tags
1970-01-01 00:00:00.000 bob @b9154636be938d3d431e75a7c906504a079bfe07 (5000)> identify exited 0 after * seconds (glob)
1970-01-01 00:00:00.000 bob @b9154636be938d3d431e75a7c906504a079bfe07 (5000)> blackbox -l 6

$ fnodescacheexists
no fnodes cache

$ rm .hg/store/lock

$ rm -f .hg/cache/tags2-visible .hg/cache/hgtagsfnodes1
$ hg identify
b9154636be93 tip

Create a branch:

$ echo bb > a
$ hg status
M a
$ hg identify
b9154636be93+ tip
$ hg co first
0 files updated, 0 files merged, 1 files removed, 0 files unresolved
$ hg id
acb14030fe0a+ first
$ hg id -r 'wdir()'
acb14030fe0a+ first
$ hg -v id
acb14030fe0a+ first
$ hg status
M a
$ echo 1 > b
$ hg add b
$ hg commit -m "branch"
created new head

Creating a new commit shouldn't append to the .hgtags fnodes cache until
tag info is accessed

$ f --size --hexdump .hg/cache/hgtagsfnodes1
.hg/cache/hgtagsfnodes1: size=48
0000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................|
0010: ff ff ff ff ff ff ff ff b9 15 46 36 26 b7 b4 a7 |..........F6&...|
0020: 73 e0 9e e3 c5 2f 51 0e 19 e0 5e 1f f9 66 d8 59 |s..../Q...^..f.Y|

$ hg id
c8edf04160c7 tip

First 4 bytes of record 3 are the changeset fragment (the same layout the
decoder sketch above walks)
214
214
215 $ f --size --hexdump .hg/cache/hgtagsfnodes1
215 $ f --size --hexdump .hg/cache/hgtagsfnodes1
216 .hg/cache/hgtagsfnodes1: size=72
216 .hg/cache/hgtagsfnodes1: size=72
217 0000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................|
217 0000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................|
218 0010: ff ff ff ff ff ff ff ff b9 15 46 36 26 b7 b4 a7 |..........F6&...|
218 0010: ff ff ff ff ff ff ff ff b9 15 46 36 26 b7 b4 a7 |..........F6&...|
219 0020: 73 e0 9e e3 c5 2f 51 0e 19 e0 5e 1f f9 66 d8 59 |s..../Q...^..f.Y|
219 0020: 73 e0 9e e3 c5 2f 51 0e 19 e0 5e 1f f9 66 d8 59 |s..../Q...^..f.Y|
220 0030: c8 ed f0 41 00 00 00 00 00 00 00 00 00 00 00 00 |...A............|
220 0030: c8 ed f0 41 00 00 00 00 00 00 00 00 00 00 00 00 |...A............|
221 0040: 00 00 00 00 00 00 00 00 |........|
221 0040: 00 00 00 00 00 00 00 00 |........|
222
222
223 Merge the two heads:
223 Merge the two heads:
224
224
225 $ hg merge 1
225 $ hg merge 1
226 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
226 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
227 (branch merge, don't forget to commit)
227 (branch merge, don't forget to commit)
228 $ hg blackbox -l3
228 $ hg blackbox -l3
229 1970-01-01 00:00:00.000 bob @c8edf04160c7f731e4589d66ab3ab3486a64ac28 (5000)> merge 1
229 1970-01-01 00:00:00.000 bob @c8edf04160c7f731e4589d66ab3ab3486a64ac28 (5000)> merge 1
230 1970-01-01 00:00:00.000 bob @c8edf04160c7f731e4589d66ab3ab3486a64ac28+b9154636be938d3d431e75a7c906504a079bfe07 (5000)> merge 1 exited 0 after * seconds (glob)
230 1970-01-01 00:00:00.000 bob @c8edf04160c7f731e4589d66ab3ab3486a64ac28+b9154636be938d3d431e75a7c906504a079bfe07 (5000)> merge 1 exited 0 after * seconds (glob)
231 1970-01-01 00:00:00.000 bob @c8edf04160c7f731e4589d66ab3ab3486a64ac28+b9154636be938d3d431e75a7c906504a079bfe07 (5000)> blackbox -l3
231 1970-01-01 00:00:00.000 bob @c8edf04160c7f731e4589d66ab3ab3486a64ac28+b9154636be938d3d431e75a7c906504a079bfe07 (5000)> blackbox -l3
232 $ hg id
232 $ hg id
233 c8edf04160c7+b9154636be93+ tip
233 c8edf04160c7+b9154636be93+ tip
234 $ hg status
234 $ hg status
235 M .hgtags
235 M .hgtags
236 $ hg commit -m "merge"
236 $ hg commit -m "merge"
237
237
238 Create a fake head, make sure tag not visible afterwards:
238 Create a fake head, make sure tag not visible afterwards:
239
239
240 $ cp .hgtags tags
240 $ cp .hgtags tags
241 $ hg tag last
241 $ hg tag last
242 $ hg rm .hgtags
242 $ hg rm .hgtags
243 $ hg commit -m "remove"
243 $ hg commit -m "remove"
244
244
245 $ mv tags .hgtags
245 $ mv tags .hgtags
246 $ hg add .hgtags
246 $ hg add .hgtags
247 $ hg commit -m "readd"
247 $ hg commit -m "readd"
248 $
248 $
249 $ hg tags
249 $ hg tags
250 tip 6:35ff301afafe
250 tip 6:35ff301afafe
251 first 0:acb14030fe0a
251 first 0:acb14030fe0a
252
252
253 Add invalid tags:
253 Add invalid tags:
254
254
255 $ echo "spam" >> .hgtags
255 $ echo "spam" >> .hgtags
256 $ echo >> .hgtags
256 $ echo >> .hgtags
257 $ echo "foo bar" >> .hgtags
257 $ echo "foo bar" >> .hgtags
258 $ echo "a5a5 invalid" >> .hg/localtags
258 $ echo "a5a5 invalid" >> .hg/localtags
259 $ cat .hgtags
259 $ cat .hgtags
260 acb14030fe0a21b60322c440ad2d20cf7685a376 first
260 acb14030fe0a21b60322c440ad2d20cf7685a376 first
261 spam
261 spam
262
262
263 foo bar
263 foo bar
264 $ hg commit -m "tags"
264 $ hg commit -m "tags"
265
265
266 Report tag parse error on other head:
266 Report tag parse error on other head:
267
267
268 $ hg up 3
268 $ hg up 3
269 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
269 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
270 $ echo 'x y' >> .hgtags
270 $ echo 'x y' >> .hgtags
271 $ hg commit -m "head"
271 $ hg commit -m "head"
272 created new head
272 created new head
273
273
274 $ hg tags --debug
274 $ hg tags --debug
275 .hgtags@75d9f02dfe28, line 2: cannot parse entry
275 .hgtags@75d9f02dfe28, line 2: cannot parse entry
276 .hgtags@75d9f02dfe28, line 4: node 'foo' is not well formed
276 .hgtags@75d9f02dfe28, line 4: node 'foo' is not well formed
277 .hgtags@c4be69a18c11, line 2: node 'x' is not well formed
277 .hgtags@c4be69a18c11, line 2: node 'x' is not well formed
278 tip 8:c4be69a18c11e8bc3a5fdbb576017c25f7d84663
278 tip 8:c4be69a18c11e8bc3a5fdbb576017c25f7d84663
279 first 0:acb14030fe0a21b60322c440ad2d20cf7685a376
  first 0:acb14030fe0a21b60322c440ad2d20cf7685a376
  $ hg tip
  changeset: 8:c4be69a18c11
  tag: tip
  parent: 3:ac5e980c4dc0
  user: test
  date: Thu Jan 01 00:00:00 1970 +0000
  summary: head


Test tag precedence rules:

  $ cd ..
  $ hg init t2
  $ cd t2
  $ echo foo > foo
  $ hg add foo
  $ hg ci -m 'add foo' # rev 0
  $ hg tag bar # rev 1
  $ echo >> foo
  $ hg ci -m 'change foo 1' # rev 2
  $ hg up -C 1
  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ hg tag -r 1 -f bar # rev 3
  $ hg up -C 1
  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ echo >> foo
  $ hg ci -m 'change foo 2' # rev 4
  created new head
  $ hg tags
  tip 4:0c192d7d5e6b
  bar 1:78391a272241

Repeat in case of cache effects:

  $ hg tags
  tip 4:0c192d7d5e6b
  bar 1:78391a272241

Detailed dump of tag info:

  $ hg heads -q # expect 4, 3, 2
  4:0c192d7d5e6b
  3:6fa450212aeb
  2:7a94127795a3
  $ dumptags 2
  rev 2: .hgtags:
  bbd179dfa0a71671c253b3ae0aa1513b60d199fa bar
  $ dumptags 3
  rev 3: .hgtags:
  bbd179dfa0a71671c253b3ae0aa1513b60d199fa bar
  bbd179dfa0a71671c253b3ae0aa1513b60d199fa bar
  78391a272241d70354aa14c874552cad6b51bb42 bar
  $ dumptags 4
  rev 4: .hgtags:
  bbd179dfa0a71671c253b3ae0aa1513b60d199fa bar

Dump cache:

  $ cat .hg/cache/tags2-visible
  4 0c192d7d5e6b78a714de54a2e9627952a877e25a
  bbd179dfa0a71671c253b3ae0aa1513b60d199fa bar
  bbd179dfa0a71671c253b3ae0aa1513b60d199fa bar
  78391a272241d70354aa14c874552cad6b51bb42 bar

  $ f --size --hexdump .hg/cache/hgtagsfnodes1
  .hg/cache/hgtagsfnodes1: size=120
  0000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................|
  0010: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................|
  0020: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................|
  0030: 7a 94 12 77 0c 04 f2 a8 af 31 de 17 fa b7 42 28 |z..w.....1....B(|
  0040: 78 ee 5a 2d ad bc 94 3d 6f a4 50 21 7d 3b 71 8c |x.Z-...=o.P!};q.|
  0050: 96 4e f3 7b 89 e5 50 eb da fd 57 89 e7 6c e1 b0 |.N.{..P...W..l..|
  0060: 0c 19 2d 7d 0c 04 f2 a8 af 31 de 17 fa b7 42 28 |..-}.....1....B(|
  0070: 78 ee 5a 2d ad bc 94 3d |x.Z-...=|

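The layout behind these sizes: each revision owns one 24-byte record, the
first 4 bytes of the changeset node followed by the 20-byte filenode of
.hgtags at that changeset, and never-filled slots are all 0xff. Five
revisions therefore give size=120, and only the slots for revisions 2-4 are
populated above. A minimal sketch of decoding the file under that assumption
(decode_fnodes_cache is a hypothetical helper, not Mercurial code):

RECORD_SIZE = 24  # 4-byte changeset node prefix + 20-byte .hgtags filenode

def decode_fnodes_cache(path):
    # Yield (rev, node_prefix_hex, fnode_hex) for every populated slot;
    # a slot of all 0xff means the revision was never cached.
    with open(path, "rb") as fp:
        data = fp.read()
    for rev in range(len(data) // RECORD_SIZE):
        record = data[rev * RECORD_SIZE:(rev + 1) * RECORD_SIZE]
        if record != b"\xff" * RECORD_SIZE:
            yield rev, record[:4].hex(), record[4:].hex()

Run over the dump above, it would yield entries for revisions 2, 3 and 4,
whose prefixes 7a941277, 6fa45021 and 0c192d7d match the three heads listed
earlier.
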
Corrupt the .hgtags fnodes cache
Extra junk data at the end should get overwritten on next cache update

  $ echo extra >> .hg/cache/hgtagsfnodes1
  $ echo dummy1 > foo
  $ hg commit -m throwaway1

  $ hg tags
  tip 5:8dbfe60eff30
  bar 1:78391a272241

  $ hg blackbox -l 6
  1970-01-01 00:00:00.000 bob @8dbfe60eff306a54259cfe007db9e330e7ecf866 (5000)> tags
  1970-01-01 00:00:00.000 bob @8dbfe60eff306a54259cfe007db9e330e7ecf866 (5000)> writing 24 bytes to cache/hgtagsfnodes1
  1970-01-01 00:00:00.000 bob @8dbfe60eff306a54259cfe007db9e330e7ecf866 (5000)> 3/4 cache hits/lookups in * seconds (glob)
  1970-01-01 00:00:00.000 bob @8dbfe60eff306a54259cfe007db9e330e7ecf866 (5000)> writing .hg/cache/tags2-visible with 1 tags
  1970-01-01 00:00:00.000 bob @8dbfe60eff306a54259cfe007db9e330e7ecf866 (5000)> tags exited 0 after * seconds (glob)
  1970-01-01 00:00:00.000 bob @8dbfe60eff306a54259cfe007db9e330e7ecf866 (5000)> blackbox -l 6

On junk data + missing cache entries, hg also overwrites the junk.

  $ rm -f .hg/cache/tags2-visible
  >>> import os
  >>> with open(".hg/cache/hgtagsfnodes1", "ab+") as fp:
  ...     fp.seek(-10, os.SEEK_END) and None # "and None" keeps doctest output empty
  ...     fp.truncate() and None # chop the last 10 bytes, leaving a partial record

  $ hg debugtagscache | tail -2
  4 0c192d7d5e6b78a714de54a2e9627952a877e25a 0c04f2a8af31de17fab7422878ee5a2dadbc943d
  5 8dbfe60eff306a54259cfe007db9e330e7ecf866 missing
  $ hg tags
  tip 5:8dbfe60eff30
  bar 1:78391a272241
  $ hg debugtagscache | tail -2
  4 0c192d7d5e6b78a714de54a2e9627952a877e25a 0c04f2a8af31de17fab7422878ee5a2dadbc943d
  5 8dbfe60eff306a54259cfe007db9e330e7ecf866 0c04f2a8af31de17fab7422878ee5a2dadbc943d

If the 4 bytes of node hash for a record don't match an existing node, the entry
is flagged as invalid.

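The validity check needs nothing beyond the changelog: the slot for revision
N is trusted only while its stored prefix still matches the changeset node at
that revision. A sketch of the idea, assuming the record layout described
above (record_is_valid is a hypothetical name; repo.changelog.node() is the
usual Mercurial accessor):

def record_is_valid(repo, rev, record):
    # A stripped and re-created revision gets a new node, so a stale
    # slot is detected by its now-mismatching 4-byte prefix.
    return record[:4] == repo.changelog.node(rev)[:4]
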
  >>> import os
  >>> with open(".hg/cache/hgtagsfnodes1", "rb+") as fp:
  ...     fp.seek(-24, os.SEEK_END) and None # start of the last (rev 5) record
  ...     fp.write(b'\xde\xad') and None # clobber 2 bytes of its node prefix

  $ f --size --hexdump .hg/cache/hgtagsfnodes1
  .hg/cache/hgtagsfnodes1: size=144
  0000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................|
  0010: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................|
  0020: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................|
  0030: 7a 94 12 77 0c 04 f2 a8 af 31 de 17 fa b7 42 28 |z..w.....1....B(|
  0040: 78 ee 5a 2d ad bc 94 3d 6f a4 50 21 7d 3b 71 8c |x.Z-...=o.P!};q.|
  0050: 96 4e f3 7b 89 e5 50 eb da fd 57 89 e7 6c e1 b0 |.N.{..P...W..l..|
  0060: 0c 19 2d 7d 0c 04 f2 a8 af 31 de 17 fa b7 42 28 |..-}.....1....B(|
  0070: 78 ee 5a 2d ad bc 94 3d de ad e6 0e 0c 04 f2 a8 |x.Z-...=........|
  0080: af 31 de 17 fa b7 42 28 78 ee 5a 2d ad bc 94 3d |.1....B(x.Z-...=|

  $ hg debugtagscache | tail -2
  4 0c192d7d5e6b78a714de54a2e9627952a877e25a 0c04f2a8af31de17fab7422878ee5a2dadbc943d
  5 8dbfe60eff306a54259cfe007db9e330e7ecf866 invalid

  $ hg tags
  tip 5:8dbfe60eff30
  bar 1:78391a272241

BUG: If the filenode part of an entry in hgtagsfnodes is corrupt and
tags2-visible is missing, `hg tags` aborts. Corrupting the leading 4 bytes of
the node hash (as above) doesn't seem to trigger the issue. Also note that the
debug command hides the corruption, both with and without tags2-visible.

  $ mv .hg/cache/hgtagsfnodes1 .hg/cache/hgtagsfnodes1.bak
  $ hg debugupdatecaches

  >>> import os
  >>> with open(".hg/cache/hgtagsfnodes1", "rb+") as fp:
  ...     fp.seek(-16, os.SEEK_END) and None # inside the filenode part of the last record
  ...     fp.write(b'\xde\xad') and None # corrupt 2 bytes of the stored .hgtags filenode

  $ f --size --hexdump .hg/cache/hgtagsfnodes1
  .hg/cache/hgtagsfnodes1: size=144
  0000: bb d1 79 df 00 00 00 00 00 00 00 00 00 00 00 00 |..y.............|
  0010: 00 00 00 00 00 00 00 00 78 39 1a 27 0c 04 f2 a8 |........x9.'....|
  0020: af 31 de 17 fa b7 42 28 78 ee 5a 2d ad bc 94 3d |.1....B(x.Z-...=|
  0030: 7a 94 12 77 0c 04 f2 a8 af 31 de 17 fa b7 42 28 |z..w.....1....B(|
  0040: 78 ee 5a 2d ad bc 94 3d 6f a4 50 21 7d 3b 71 8c |x.Z-...=o.P!};q.|
  0050: 96 4e f3 7b 89 e5 50 eb da fd 57 89 e7 6c e1 b0 |.N.{..P...W..l..|
  0060: 0c 19 2d 7d 0c 04 f2 a8 af 31 de 17 fa b7 42 28 |..-}.....1....B(|
  0070: 78 ee 5a 2d ad bc 94 3d 8d bf e6 0e 0c 04 f2 a8 |x.Z-...=........|
  0080: de ad de 17 fa b7 42 28 78 ee 5a 2d ad bc 94 3d |......B(x.Z-...=|

  $ hg debugtagscache | tail -2
  4 0c192d7d5e6b78a714de54a2e9627952a877e25a 0c04f2a8af31de17fab7422878ee5a2dadbc943d
  5 8dbfe60eff306a54259cfe007db9e330e7ecf866 0c04f2a8deadde17fab7422878ee5a2dadbc943d (unknown node)

  $ rm -f .hg/cache/tags2-visible
  $ hg debugtagscache | tail -2
  4 0c192d7d5e6b78a714de54a2e9627952a877e25a 0c04f2a8af31de17fab7422878ee5a2dadbc943d
  5 8dbfe60eff306a54259cfe007db9e330e7ecf866 0c04f2a8deadde17fab7422878ee5a2dadbc943d (unknown node)

  $ hg tags
  tip 5:8dbfe60eff30
  bar 1:78391a272241

BUG: Unless this file is restored, the `hg tags` in the next unix-permissions
conditional will fail: "abort: data/.hgtags.i@0c04f2a8dead: no match found"

  $ mv .hg/cache/hgtagsfnodes1.bak .hg/cache/hgtagsfnodes1

#if unix-permissions no-root
Errors writing to .hgtags fnodes cache are silently ignored

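The behaviour being tested is a general "best effort" pattern: the fnodes
cache is purely an optimization, so a failed write is logged (see the
blackbox output below) and otherwise swallowed rather than aborting the
command. A minimal illustration of that pattern, independent of Mercurial's
actual internals (write_cache_best_effort and log are hypothetical):

def write_cache_best_effort(path, record, log):
    # Cache writes are advisory: on I/O failure, log and carry on, so a
    # read-only cache file never makes the triggering command fail.
    try:
        with open(path, "ab") as fp:
            fp.write(record)
    except OSError as exc:
        log("couldn't write %s: [Errno %s] %s" % (path, exc.errno, exc.strerror))
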
  $ echo dummy2 > foo
  $ hg commit -m throwaway2

  $ chmod a-w .hg/cache/hgtagsfnodes1
  $ rm -f .hg/cache/tags2-visible

  $ hg tags
  tip 6:b968051b5cf3
  bar 1:78391a272241

  $ hg blackbox -l 6
  1970-01-01 00:00:00.000 bob @b968051b5cf3f624b771779c6d5f84f1d4c3fb5d (5000)> tags
  1970-01-01 00:00:00.000 bob @b968051b5cf3f624b771779c6d5f84f1d4c3fb5d (5000)> couldn't write cache/hgtagsfnodes1: [Errno *] * (glob)
  1970-01-01 00:00:00.000 bob @b968051b5cf3f624b771779c6d5f84f1d4c3fb5d (5000)> 2/4 cache hits/lookups in * seconds (glob)
  1970-01-01 00:00:00.000 bob @b968051b5cf3f624b771779c6d5f84f1d4c3fb5d (5000)> writing .hg/cache/tags2-visible with 1 tags
  1970-01-01 00:00:00.000 bob @b968051b5cf3f624b771779c6d5f84f1d4c3fb5d (5000)> tags exited 0 after * seconds (glob)
  1970-01-01 00:00:00.000 bob @b968051b5cf3f624b771779c6d5f84f1d4c3fb5d (5000)> blackbox -l 6

  $ chmod a+w .hg/cache/hgtagsfnodes1

  $ rm -f .hg/cache/tags2-visible
  $ hg tags
  tip 6:b968051b5cf3
  bar 1:78391a272241

  $ hg blackbox -l 6
  1970-01-01 00:00:00.000 bob @b968051b5cf3f624b771779c6d5f84f1d4c3fb5d (5000)> tags
  1970-01-01 00:00:00.000 bob @b968051b5cf3f624b771779c6d5f84f1d4c3fb5d (5000)> writing 24 bytes to cache/hgtagsfnodes1
  1970-01-01 00:00:00.000 bob @b968051b5cf3f624b771779c6d5f84f1d4c3fb5d (5000)> 2/4 cache hits/lookups in * seconds (glob)
  1970-01-01 00:00:00.000 bob @b968051b5cf3f624b771779c6d5f84f1d4c3fb5d (5000)> writing .hg/cache/tags2-visible with 1 tags
  1970-01-01 00:00:00.000 bob @b968051b5cf3f624b771779c6d5f84f1d4c3fb5d (5000)> tags exited 0 after * seconds (glob)
  1970-01-01 00:00:00.000 bob @b968051b5cf3f624b771779c6d5f84f1d4c3fb5d (5000)> blackbox -l 6

  $ f --size .hg/cache/hgtagsfnodes1
  .hg/cache/hgtagsfnodes1: size=168

  $ hg -q --config extensions.strip= strip -r 6 --no-backup
#endif

Stripping doesn't truncate the tags cache until new data is available: the
oversized file is left behind and only shrinks on the next cache write (each
revision owns one 24-byte record, so the expected size is (tiprev + 1) * 24).

  $ rm -f .hg/cache/hgtagsfnodes1 .hg/cache/tags2-visible
  $ hg tags
  tip 5:8dbfe60eff30
  bar 1:78391a272241

  $ f --size .hg/cache/hgtagsfnodes1
  .hg/cache/hgtagsfnodes1: size=144

  $ hg -q --config extensions.strip= strip -r 5 --no-backup
  $ hg tags
  tip 4:0c192d7d5e6b
  bar 1:78391a272241

  $ hg blackbox -l 5
  1970-01-01 00:00:00.000 bob @0c192d7d5e6b78a714de54a2e9627952a877e25a (5000)> writing 24 bytes to cache/hgtagsfnodes1
  1970-01-01 00:00:00.000 bob @0c192d7d5e6b78a714de54a2e9627952a877e25a (5000)> 2/4 cache hits/lookups in * seconds (glob)
  1970-01-01 00:00:00.000 bob @0c192d7d5e6b78a714de54a2e9627952a877e25a (5000)> writing .hg/cache/tags2-visible with 1 tags
  1970-01-01 00:00:00.000 bob @0c192d7d5e6b78a714de54a2e9627952a877e25a (5000)> tags exited 0 after * seconds (glob)
  1970-01-01 00:00:00.000 bob @0c192d7d5e6b78a714de54a2e9627952a877e25a (5000)> blackbox -l 5

  $ f --size .hg/cache/hgtagsfnodes1
  .hg/cache/hgtagsfnodes1: size=120

  $ echo dummy > foo
  $ hg commit -m throwaway3

  $ hg tags
  tip 5:035f65efb448
  bar 1:78391a272241

  $ hg blackbox -l 6
  1970-01-01 00:00:00.000 bob @035f65efb448350f4772141702a81ab1df48c465 (5000)> tags
  1970-01-01 00:00:00.000 bob @035f65efb448350f4772141702a81ab1df48c465 (5000)> writing 24 bytes to cache/hgtagsfnodes1
  1970-01-01 00:00:00.000 bob @035f65efb448350f4772141702a81ab1df48c465 (5000)> 3/4 cache hits/lookups in * seconds (glob)
  1970-01-01 00:00:00.000 bob @035f65efb448350f4772141702a81ab1df48c465 (5000)> writing .hg/cache/tags2-visible with 1 tags
  1970-01-01 00:00:00.000 bob @035f65efb448350f4772141702a81ab1df48c465 (5000)> tags exited 0 after * seconds (glob)
  1970-01-01 00:00:00.000 bob @035f65efb448350f4772141702a81ab1df48c465 (5000)> blackbox -l 6
  $ f --size .hg/cache/hgtagsfnodes1
  .hg/cache/hgtagsfnodes1: size=144

  $ hg -q --config extensions.strip= strip -r 5 --no-backup

Test tag removal:

  $ hg tag --remove bar # rev 5
  $ hg tip -vp
  changeset: 5:5f6e8655b1c7
  tag: tip
  user: test
  date: Thu Jan 01 00:00:00 1970 +0000
  files: .hgtags
  description:
  Removed tag bar


  diff -r 0c192d7d5e6b -r 5f6e8655b1c7 .hgtags
  --- a/.hgtags Thu Jan 01 00:00:00 1970 +0000
  +++ b/.hgtags Thu Jan 01 00:00:00 1970 +0000
  @@ -1,1 +1,3 @@
  bbd179dfa0a71671c253b3ae0aa1513b60d199fa bar
  +78391a272241d70354aa14c874552cad6b51bb42 bar
  +0000000000000000000000000000000000000000 bar

  $ hg tags
  tip 5:5f6e8655b1c7
  $ hg tags # again, try to expose cache bugs
  tip 5:5f6e8655b1c7

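The diff above shows how removal is encoded: .hgtags is append-only, so the
tag is first re-asserted at its current node and then pointed at the null
node, which readers treat as "deleted". A minimal sketch of reading a single
.hgtags file under that convention (parse_hgtags_lines is a hypothetical
helper and ignores the merging of multiple heads):

NULL_NODE = "0" * 40

def parse_hgtags_lines(lines):
    # Later lines override earlier ones for the same name, so the final
    # null-node entry above effectively deletes the tag "bar".
    tags = {}
    for line in lines:
        line = line.strip()
        if not line:
            continue
        node, name = line.split(" ", 1)
        if node == NULL_NODE:
            tags.pop(name, None)
        else:
            tags[name] = node
    return tags
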
Remove nonexistent tag:

  $ hg tag --remove foobar
  abort: tag 'foobar' does not exist
  [10]
  $ hg tip
  changeset: 5:5f6e8655b1c7
  tag: tip
  user: test
  date: Thu Jan 01 00:00:00 1970 +0000
  summary: Removed tag bar


Undo a tag with rollback:

  $ hg rollback # destroy rev 5 (restore bar)
  repository tip rolled back to revision 4 (undo commit)
  working directory now based on revision 4
  $ hg tags
  tip 4:0c192d7d5e6b
  bar 1:78391a272241
  $ hg tags
  tip 4:0c192d7d5e6b
  bar 1:78391a272241

Test tag rank:

  $ cd ..
  $ hg init t3
  $ cd t3
  $ echo foo > foo
  $ hg add foo
  $ hg ci -m 'add foo' # rev 0
  $ hg tag -f bar # rev 1 bar -> 0
  $ hg tag -f bar # rev 2 bar -> 1
  $ hg tag -fr 0 bar # rev 3 bar -> 0
  $ hg tag -fr 1 bar # rev 4 bar -> 1
  $ hg tag -fr 0 bar # rev 5 bar -> 0
  $ hg tags
  tip 5:85f05169d91d
  bar 0:bbd179dfa0a7
  $ hg co 3
  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ echo barbar > foo
  $ hg ci -m 'change foo' # rev 6
  created new head
  $ hg tags
  tip 6:735c3ca72986
  bar 0:bbd179dfa0a7

Don't allow moving tag without -f:

  $ hg tag -r 3 bar
  abort: tag 'bar' already exists (use -f to force)
  [10]
  $ hg tags
  tip 6:735c3ca72986
  bar 0:bbd179dfa0a7

Strip 1: expose an old head:

  $ hg --config extensions.mq= strip 5
  saved backup bundle to $TESTTMP/t3/.hg/strip-backup/*-backup.hg (glob)
  $ hg tags # partly stale cache
  tip 5:735c3ca72986
  bar 1:78391a272241
  $ hg tags # up-to-date cache
  tip 5:735c3ca72986
  bar 1:78391a272241

Strip 2: destroy whole branch, no old head exposed

  $ hg --config extensions.mq= strip 4
  saved backup bundle to $TESTTMP/t3/.hg/strip-backup/*-backup.hg (glob)
  $ hg tags # partly stale
  tip 4:735c3ca72986
  bar 0:bbd179dfa0a7
  $ rm -f .hg/cache/tags2-visible
  $ hg tags # cold cache
  tip 4:735c3ca72986
  bar 0:bbd179dfa0a7

Test tag rank with 3 heads:

  $ cd ..
  $ hg init t4
  $ cd t4
  $ echo foo > foo
  $ hg add
  adding foo
  $ hg ci -m 'add foo' # rev 0
  $ hg tag bar # rev 1 bar -> 0
  $ hg tag -f bar # rev 2 bar -> 1
  $ hg up -qC 0
  $ hg tag -fr 2 bar # rev 3 bar -> 2
  $ hg tags
  tip 3:197c21bbbf2c
  bar 2:6fa450212aeb
  $ hg up -qC 0
  $ hg tag -m 'retag rev 0' -fr 0 bar # rev 4 bar -> 0, but bar stays at 2

Bar should still point to rev 2:

  $ hg tags
  tip 4:3b4b14ed0202
  bar 2:6fa450212aeb

Test that removing global/local tags does not get confused when trying
to remove a tag of type X which actually only exists as type Y:

  $ cd ..
  $ hg init t5
  $ cd t5
  $ echo foo > foo
  $ hg add
  adding foo
  $ hg ci -m 'add foo' # rev 0

  $ hg tag -r 0 -l localtag
  $ hg tag --remove localtag
  abort: tag 'localtag' is not a global tag
  [10]
  $
  $ hg tag -r 0 globaltag
  $ hg tag --remove -l globaltag
  abort: tag 'globaltag' is not a local tag
  [10]
  $ hg tags -v
  tip 1:a0b6fe111088
  localtag 0:bbd179dfa0a7 local
  globaltag 0:bbd179dfa0a7

Templated output:

(immediate values)

  $ hg tags -T '{pad(tag, 9)} {rev}:{node} ({type})\n'
  tip 1:a0b6fe111088c8c29567d3876cc466aa02927cae ()
  localtag 0:bbd179dfa0a71671c253b3ae0aa1513b60d199fa (local)
  globaltag 0:bbd179dfa0a71671c253b3ae0aa1513b60d199fa ()

(ctx/revcache dependent)

  $ hg tags -T '{pad(tag, 9)} {rev} {file_adds}\n'
  tip 1 .hgtags
  localtag 0 foo
  globaltag 0 foo

  $ hg tags -T '{pad(tag, 9)} {rev}:{node|shortest}\n'
  tip 1:a0b6
  localtag 0:bbd1
  globaltag 0:bbd1

Test for issue3911

  $ hg tag -r 0 -l localtag2
  $ hg tag -l --remove localtag2
  $ hg tags -v
  tip 1:a0b6fe111088
  localtag 0:bbd179dfa0a7 local
  globaltag 0:bbd179dfa0a7

  $ hg tag -r 1 -f localtag
  $ hg tags -v
  tip 2:5c70a037bb37
  localtag 1:a0b6fe111088
  globaltag 0:bbd179dfa0a7

  $ hg tags -v
  tip 2:5c70a037bb37
  localtag 1:a0b6fe111088
  globaltag 0:bbd179dfa0a7

  $ hg tag -r 1 localtag2
  $ hg tags -v
  tip 3:bbfb8cd42be2
  localtag2 1:a0b6fe111088
  localtag 1:a0b6fe111088
  globaltag 0:bbd179dfa0a7

  $ hg tags -v
  tip 3:bbfb8cd42be2
  localtag2 1:a0b6fe111088
  localtag 1:a0b6fe111088
  globaltag 0:bbd179dfa0a7

  $ cd ..

Create a repository with tags data to test .hgtags fnodes transfer

  $ hg init tagsserver
  $ cd tagsserver
  $ touch foo
  $ hg -q commit -A -m initial
  $ hg tag -m 'tag 0.1' 0.1
  $ echo second > foo
  $ hg commit -m second
  $ hg tag -m 'tag 0.2' 0.2
  $ hg tags
  tip 3:40f0358cb314
  0.2 2:f63cc8fe54e4
  0.1 0:96ee1d7354c4
  $ cd ..

Cloning should pull down hgtags fnodes mappings and write the cache file

  $ hg clone --pull tagsserver tagsclient
  requesting all changes
  adding changesets
  adding manifests
  adding file changes
  added 4 changesets with 4 changes to 2 files
  new changesets 96ee1d7354c4:40f0358cb314
  updating to branch default
  2 files updated, 0 files merged, 0 files removed, 0 files unresolved

Missing tags2* files means the cache wasn't written through the normal mechanism.

  $ ls tagsclient/.hg/cache
  branch2-base
  branch2-immutable
  branch2-served
  branch2-served.hidden
  branch2-visible
  branch2-visible-hidden
  hgtagsfnodes1
  rbc-names-v1
  rbc-revs-v1
  tags2
  tags2-served

Cache should contain the head only, even though other nodes have tags data
(revisions 0-2 are all 0xff below; only the 24-byte slot for rev 3, starting
at offset 0x48, is populated).

  $ f --size --hexdump tagsclient/.hg/cache/hgtagsfnodes1
  tagsclient/.hg/cache/hgtagsfnodes1: size=96
  0000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................|
  0010: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................|
  0020: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................|
  0030: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................|
  0040: ff ff ff ff ff ff ff ff 40 f0 35 8c 19 e0 a7 d3 |........@.5.....|
  0050: 8a 5c 6a 82 4d cf fb a5 87 d0 2f a3 1e 4f 2f 8a |.\j.M...../..O/.|

Running hg tags should produce the tags2* file and not change the cache

  $ hg -R tagsclient tags
  tip 3:40f0358cb314
  0.2 2:f63cc8fe54e4
  0.1 0:96ee1d7354c4

  $ ls tagsclient/.hg/cache
  branch2-base
  branch2-immutable
  branch2-served
  branch2-served.hidden
  branch2-visible
  branch2-visible-hidden
  hgtagsfnodes1
  rbc-names-v1
  rbc-revs-v1
  tags2
  tags2-served
  tags2-visible

  $ f --size --hexdump tagsclient/.hg/cache/hgtagsfnodes1
  tagsclient/.hg/cache/hgtagsfnodes1: size=96
  0000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................|
  0010: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................|
  0020: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................|
  0030: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff |................|
  0040: ff ff ff ff ff ff ff ff 40 f0 35 8c 19 e0 a7 d3 |........@.5.....|
  0050: 8a 5c 6a 82 4d cf fb a5 87 d0 2f a3 1e 4f 2f 8a |.\j.M...../..O/.|

Check that the bundle includes cache data

  $ hg -R tagsclient bundle --all ./test-cache-in-bundle-all-rev.hg
  4 changesets found
  $ hg debugbundle ./test-cache-in-bundle-all-rev.hg
  Stream params: {Compression: BZ}
  changegroup -- {nbchanges: 4, version: 02} (mandatory: True)
  96ee1d7354c4ad7372047672c36a1f561e3a6a4c
  c4dab0c2fd337eb9191f80c3024830a4889a8f34
  f63cc8fe54e4d326f8d692805d70e092f851ddb1
  40f0358cb314c824a5929ee527308d90e023bc10
  hgtagsfnodes -- {} (mandatory: True)
  cache:rev-branch-cache -- {} (mandatory: False)

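The same part listing can be obtained programmatically. A rough sketch using
Mercurial's internal bundle2 API (an assumption about unstable internal
interfaces, so names and signatures may differ between versions):

from mercurial import bundle2, ui as uimod

def list_parts(path):
    # Iterate the bundle2 parts and show each name plus its mandatory
    # flag (which debugbundle prints as "mandatory: True/False").
    with open(path, "rb") as fp:
        unbundler = bundle2.getunbundler(uimod.ui.load(), fp)
        for part in unbundler.iterparts():
            print(part.type, part.mandatory)
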
Check that local clone includes cache data

  $ hg clone tagsclient tags-local-clone
  updating to branch default
  2 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ (cd tags-local-clone/.hg/cache/; ls -1 tag*)
  tags2
  tags2-served
  tags2-visible

Avoid writing logs on trying to delete an already deleted tag
  $ hg init issue5752
  $ cd issue5752
  $ echo > a
  $ hg commit -Am 'add a'
  adding a
  $ hg tag a
  $ hg tags
  tip 1:bd7ee4f3939b
  a 0:a8a82d372bb3
  $ hg log
  changeset: 1:bd7ee4f3939b
  tag: tip
  user: test
  date: Thu Jan 01 00:00:00 1970 +0000
  summary: Added tag a for changeset a8a82d372bb3

  changeset: 0:a8a82d372bb3
  tag: a
  user: test
  date: Thu Jan 01 00:00:00 1970 +0000
  summary: add a

  $ hg tag --remove a
  $ hg log
  changeset: 2:e7feacc7ec9e
  tag: tip
  user: test
  date: Thu Jan 01 00:00:00 1970 +0000
  summary: Removed tag a

  changeset: 1:bd7ee4f3939b
  user: test
  date: Thu Jan 01 00:00:00 1970 +0000
  summary: Added tag a for changeset a8a82d372bb3

  changeset: 0:a8a82d372bb3
  user: test
  date: Thu Jan 01 00:00:00 1970 +0000
  summary: add a

  $ hg tag --remove a
  abort: tag 'a' is already removed
  [10]
  $ hg log
  changeset: 2:e7feacc7ec9e
  tag: tip
  user: test
  date: Thu Jan 01 00:00:00 1970 +0000
  summary: Removed tag a

  changeset: 1:bd7ee4f3939b
  user: test
  date: Thu Jan 01 00:00:00 1970 +0000
  summary: Added tag a for changeset a8a82d372bb3

  changeset: 0:a8a82d372bb3
  user: test
  date: Thu Jan 01 00:00:00 1970 +0000
  summary: add a

  $ cat .hgtags
  a8a82d372bb35b42ff736e74f07c23bcd99c371f a
  a8a82d372bb35b42ff736e74f07c23bcd99c371f a
  0000000000000000000000000000000000000000 a

  $ cd ..

.hgtags fnode should be properly resolved at merge revision (issue6673)

  $ hg init issue6673
  $ cd issue6673

  $ touch a
  $ hg ci -qAm a
  $ hg branch -q stable
  $ hg ci -m branch

  $ hg up -q default
  $ hg merge -q stable
  $ hg ci -m merge

add tag to stable branch:

  $ hg up -q stable
  $ echo a >> a
  $ hg ci -m a
  $ hg tag whatever
  $ hg log -GT'{rev} {tags}\n'
  @ 4 tip
  |
  o 3 whatever
  |
  | o 2
  |/|
  o | 1
  |/
  o 0


merge tagged stable into default:

  $ hg up -q default
  $ hg merge -q stable
  $ hg ci -m merge
  $ hg log -GT'{rev} {tags}\n'
  @ 5 tip
  |\
  | o 4
  | |
  | o 3 whatever
  | |
  o | 2
  |\|
  | o 1
  |/
  o 0


  $ cd ..