inline-changelog: fix a critical bug in write_pending that deletes data...
marmoute - r52530:3cf9e52f stable
NO CONTENT: new file 100644, binary diff hidden
@@ -0,0 +1,307 b''
======================================================
Test operation on repository with an inlined changelog
======================================================

Inline revlogs have been a bag of complexity for a long time, and their
combination with the special transaction logic on the changelog has long been
a source of bugs poorly covered by the test suite.

We stopped using inline revlogs for the changelog in a93e52f0b6ff, upgrading
legacy inlined versions as soon as possible when we see them. However, since
this Mercurial no longer produces such inlined changelogs, that case is very
poorly covered in the test suite. This test file aims at covering these cases.

Double checking test data
=========================

We should have a repository around

  $ mkdir sanity-check
  $ cd sanity-check
  $ tar xf $TESTDIR/bundles/inlined-changelog.tar
  $ cd inlined-changelog
  $ hg root
  $TESTTMP/sanity-check/inlined-changelog

The repository should not be corrupted initially

  $ hg verify
  checking changesets
  checking manifests
  crosschecking files in changesets and manifests
  checking files
  checking dirstate
  checked 1 changesets with 1 changes to 1 files

The changelog of that repository MUST be inlined

  $ hg debugrevlog -c | grep -E '^flags\b'
  flags : inline

Touching that repository MUST split that inlined changelog

  $ hg branch foo --quiet
  $ hg commit -m foo --quiet
  $ hg debugrevlog -c | grep -E '^flags\b'
  flags : (none)

  $ cd ../..

Test doing a simple commit
==========================

Simple commit
-------------

  $ mkdir simple-commit
  $ cd simple-commit
  $ tar xf $TESTDIR/bundles/inlined-changelog.tar
  $ cd inlined-changelog
  $ hg up --quiet
  $ hg log -GT '[{rev}] {desc}\n'
  @ [0] first commit

  $ echo b > b
  $ hg add b
  $ hg commit -m "second changeset"
  $ hg verify
  checking changesets
  checking manifests
  crosschecking files in changesets and manifests
  checking files
  checking dirstate
  checked 2 changesets with 2 changes to 2 files
  $ hg log -GT '[{rev}] {desc}\n'
  @ [1] second changeset
  |
  o [0] first commit

  $ cd ../..

Simple commit with a pretxn hook configured
-------------------------------------------

Before 6.7.3 this used to delete the changelog index

  $ mkdir pretxnclose-commit
  $ cd pretxnclose-commit
  $ tar xf $TESTDIR/bundles/inlined-changelog.tar
  $ cat >> inlined-changelog/.hg/hgrc <<EOF
  > [hooks]
  > pretxnclose=hg log -r tip -T "pre-txn tip rev: {rev}\n"
  > EOF
  $ cd inlined-changelog
  $ hg up --quiet
  $ hg log -GT '[{rev}] {desc}\n'
  @ [0] first commit

  $ echo b > b
  $ hg add b
  $ hg commit -m "second changeset"
  pre-txn tip rev: 1 (missing-correct-output !)
  warning: ignoring unknown working parent 11b63e930bf2! (known-bad-output !)
  pre-txn tip rev: 0 (known-bad-output !)
  $ hg verify
  checking changesets
  checking manifests
  crosschecking files in changesets and manifests
  checking files
  checking dirstate
  checked 2 changesets with 2 changes to 2 files
  $ hg log -GT '[{rev}] {desc}\n'
  @ [1] second changeset
  |
  o [0] first commit

  $ cd ../..

Test pushing to a repository with an inlined changelog
=======================================================

Simple local push
-----------------

  $ mkdir simple-local-push
  $ cd simple-local-push
  $ tar xf $TESTDIR/bundles/inlined-changelog.tar
  $ hg log -R inlined-changelog -T '[{rev}] {desc}\n'
  [0] first commit

  $ hg clone --pull inlined-changelog client
  requesting all changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files
  new changesets 827f11bfd362
  updating to branch default
  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ cd client
  $ echo b > b
  $ hg add b
  $ hg commit -m "second changeset"
  $ hg push
  pushing to $TESTTMP/*/inlined-changelog (glob)
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files
  $ cd ..

  $ hg verify -R inlined-changelog
  checking changesets
  checking manifests
  crosschecking files in changesets and manifests
  checking files
  checking dirstate
  checked 2 changesets with 2 changes to 2 files
  $ hg log -R inlined-changelog -T '[{rev}] {desc}\n'
  [1] second changeset
  [0] first commit
  $ cd ..

Simple local push with a pretxnchangegroup hook
-----------------------------------------------

Before 6.7.3 this used to delete the server changelog

  $ mkdir pretxnchangegroup-local-push
  $ cd pretxnchangegroup-local-push
  $ tar xf $TESTDIR/bundles/inlined-changelog.tar
  $ cat >> inlined-changelog/.hg/hgrc <<EOF
  > [hooks]
  > pretxnchangegroup=hg log -r tip -T "pre-txn tip rev: {rev}\n"
  > EOF
  $ hg log -R inlined-changelog -T '[{rev}] {desc}\n'
  [0] first commit

  $ hg clone --pull inlined-changelog client
  requesting all changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files
  new changesets 827f11bfd362
  updating to branch default
  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ cd client
  $ echo b > b
  $ hg add b
  $ hg commit -m "second changeset"
  $ hg push
  pushing to $TESTTMP/*/inlined-changelog (glob)
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  pre-txn tip rev: 1 (missing-correct-output !)
  pre-txn tip rev: 0 (known-bad-output !)
  added 1 changesets with 1 changes to 1 files
  $ cd ..

  $ hg verify -R inlined-changelog
  checking changesets
  checking manifests
  crosschecking files in changesets and manifests
  checking files
  checking dirstate
  checked 2 changesets with 2 changes to 2 files
  $ hg log -R inlined-changelog -T '[{rev}] {desc}\n'
  [1] second changeset
  [0] first commit
  $ cd ..

Simple ssh push
---------------

  $ mkdir simple-ssh-push
  $ cd simple-ssh-push
  $ tar xf $TESTDIR/bundles/inlined-changelog.tar
  $ hg log -R inlined-changelog -T '[{rev}] {desc}\n'
  [0] first commit

  $ hg clone ssh://user@dummy/"`pwd`"/inlined-changelog client
  requesting all changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files
  new changesets 827f11bfd362
  updating to branch default
  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ cd client
  $ echo b > b
  $ hg add b
  $ hg commit -m "second changeset"
  $ hg push
  pushing to ssh://user@dummy/$TESTTMP/simple-ssh-push/inlined-changelog
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 1 changes to 1 files
  $ cd ..

  $ hg verify -R inlined-changelog
  checking changesets
  checking manifests
  crosschecking files in changesets and manifests
  checking files
  checking dirstate
  checked 2 changesets with 2 changes to 2 files
  $ hg log -R inlined-changelog -T '[{rev}] {desc}\n'
  [1] second changeset
  [0] first commit
  $ cd ..

Simple ssh push with a pretxnchangegroup hook
---------------------------------------------

Before 6.7.3 this used to delete the server changelog

  $ mkdir pretxnchangegroup-ssh-push
  $ cd pretxnchangegroup-ssh-push
  $ tar xf $TESTDIR/bundles/inlined-changelog.tar
  $ cat >> inlined-changelog/.hg/hgrc <<EOF
  > [hooks]
  > pretxnchangegroup=hg log -r tip -T "pre-txn tip rev: {rev}\n"
  > EOF
  $ hg log -R inlined-changelog -T '[{rev}] {desc}\n'
  [0] first commit

  $ hg clone ssh://user@dummy/"`pwd`"/inlined-changelog client
  requesting all changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files
  new changesets 827f11bfd362
  updating to branch default
  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ cd client
  $ echo b > b
  $ hg add b
  $ hg commit -m "second changeset"
  $ hg push
  pushing to ssh://user@dummy/$TESTTMP/pretxnchangegroup-ssh-push/inlined-changelog
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: pre-txn tip rev: 1 (missing-correct-output !)
  remote: pre-txn tip rev: 0 (known-bad-output !)
  remote: added 1 changesets with 1 changes to 1 files
  $ cd ..

  $ hg verify -R inlined-changelog
  checking changesets
  checking manifests
  crosschecking files in changesets and manifests
  checking files
  checking dirstate
  checked 2 changesets with 2 changes to 2 files
  $ hg log -R inlined-changelog -T '[{rev}] {desc}\n'
  [1] second changeset
  [0] first commit
  $ cd ..
@@ -1,507 +1,509 b''
# changelog.py - changelog class for mercurial
#
# Copyright 2005-2007 Olivia Mackall <olivia@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.


from .i18n import _
from .node import (
    bin,
    hex,
)
from .thirdparty import attr

from . import (
    encoding,
    error,
    metadata,
    pycompat,
    revlog,
)
from .utils import (
    dateutil,
    stringutil,
)
from .revlogutils import (
    constants as revlog_constants,
    flagutil,
)

_defaultextra = {b'branch': b'default'}


def _string_escape(text):
    """
    >>> from .pycompat import bytechr as chr
    >>> d = {b'nl': chr(10), b'bs': chr(92), b'cr': chr(13), b'nul': chr(0)}
    >>> s = b"ab%(nl)scd%(bs)s%(bs)sn%(nul)s12ab%(cr)scd%(bs)s%(nl)s" % d
    >>> s
    'ab\\ncd\\\\\\\\n\\x0012ab\\rcd\\\\\\n'
    >>> res = _string_escape(s)
    >>> s == _string_unescape(res)
    True
    """
    # subset of the string_escape codec
    text = (
        text.replace(b'\\', b'\\\\')
        .replace(b'\n', b'\\n')
        .replace(b'\r', b'\\r')
    )
    return text.replace(b'\0', b'\\0')


def _string_unescape(text):
    if b'\\0' in text:
        # fix up \0 without getting into trouble with \\0
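        # (illustrative) in escaped bytes like br"a\\0b", the b"0" follows an
        # escaped backslash and is not part of an escaped NUL; inserting
        # b'\n' after each br"\\" below keeps the br"\0" replacement from
        # matching it.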
        text = text.replace(b'\\\\', b'\\\\\n')
        text = text.replace(b'\\0', b'\0')
        text = text.replace(b'\n', b'')
    return stringutil.unescapestr(text)


def decodeextra(text):
    """
    >>> from .pycompat import bytechr as chr
    >>> sorted(decodeextra(encodeextra({b'foo': b'bar', b'baz': chr(0) + b'2'})
    ...                    ).items())
    [('baz', '\\x002'), ('branch', 'default'), ('foo', 'bar')]
    >>> sorted(decodeextra(encodeextra({b'foo': b'bar',
    ...                                 b'baz': chr(92) + chr(0) + b'2'})
    ...                    ).items())
    [('baz', '\\\\\\x002'), ('branch', 'default'), ('foo', 'bar')]
    """
    extra = _defaultextra.copy()
    for l in text.split(b'\0'):
        if l:
            k, v = _string_unescape(l).split(b':', 1)
            extra[k] = v
    return extra


def encodeextra(d):
    # keys must be sorted to produce a deterministic changelog entry
    items = [_string_escape(b'%s:%s' % (k, d[k])) for k in sorted(d)]
    return b"\0".join(items)


def stripdesc(desc):
    """strip trailing whitespace and leading and trailing empty lines"""
    return b'\n'.join([l.rstrip() for l in desc.splitlines()]).strip(b'\n')


@attr.s
class _changelogrevision:
    # Extensions might modify _defaultextra, so let the constructor below pass
    # it in
    extra = attr.ib()
    manifest = attr.ib()
    user = attr.ib(default=b'')
    date = attr.ib(default=(0, 0))
    files = attr.ib(default=attr.Factory(list))
    filesadded = attr.ib(default=None)
    filesremoved = attr.ib(default=None)
    p1copies = attr.ib(default=None)
    p2copies = attr.ib(default=None)
    description = attr.ib(default=b'')
    branchinfo = attr.ib(default=(_defaultextra[b'branch'], False))


class changelogrevision:
    """Holds results of a parsed changelog revision.

    Changelog revisions consist of multiple pieces of data, including
    the manifest node, user, and date. This object exposes a view into
    the parsed object.
    """

    __slots__ = (
        '_offsets',
        '_text',
        '_sidedata',
        '_cpsd',
        '_changes',
    )

    def __new__(cls, cl, text, sidedata, cpsd):
        if not text:
            return _changelogrevision(extra=_defaultextra, manifest=cl.nullid)

        self = super(changelogrevision, cls).__new__(cls)
        # We could return here and implement the following as an __init__.
        # But doing it here is equivalent and saves an extra function call.

        # format used:
        # nodeid\n        : manifest node in ascii
        # user\n          : user, no \n or \r allowed
        # time tz extra\n : date (time is int or float, timezone is int)
        #                 : extra is metadata, encoded and separated by '\0'
        #                 : older versions ignore it
        # files\n\n       : files modified by the cset, no \n or \r allowed
        # (.*)            : comment (free text, ideally utf-8)
        #
        # changelog v0 doesn't use extra
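
        # An illustrative (hypothetical) raw entry following that format, for
        # a commit touching one file on a named branch:
        #
        #   <40 hex chars of manifest node>\n
        #   Jane Doe <jane@example.com>\n
        #   1700000000 0 branch:stable\n
        #   foo/bar.txt\n
        #   \n
        #   commit message text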

        nl1 = text.index(b'\n')
        nl2 = text.index(b'\n', nl1 + 1)
        nl3 = text.index(b'\n', nl2 + 1)

        # The list of files may be empty. Which means nl3 is the first of the
        # double newline that precedes the description.
        if text[nl3 + 1 : nl3 + 2] == b'\n':
            doublenl = nl3
        else:
            doublenl = text.index(b'\n\n', nl3 + 1)

        self._offsets = (nl1, nl2, nl3, doublenl)
        self._text = text
        self._sidedata = sidedata
        self._cpsd = cpsd
        self._changes = None

        return self

    @property
    def manifest(self):
        return bin(self._text[0 : self._offsets[0]])

    @property
    def user(self):
        off = self._offsets
        return encoding.tolocal(self._text[off[0] + 1 : off[1]])

    @property
    def _rawdate(self):
        off = self._offsets
        dateextra = self._text[off[1] + 1 : off[2]]
        return dateextra.split(b' ', 2)[0:2]

    @property
    def _rawextra(self):
        off = self._offsets
        dateextra = self._text[off[1] + 1 : off[2]]
        fields = dateextra.split(b' ', 2)
        if len(fields) != 3:
            return None

        return fields[2]

    @property
    def date(self):
        raw = self._rawdate
        time = float(raw[0])
        # Various tools did silly things with the timezone.
        try:
            timezone = int(raw[1])
        except ValueError:
            timezone = 0

        return time, timezone

    @property
    def extra(self):
        raw = self._rawextra
        if raw is None:
            return _defaultextra

        return decodeextra(raw)

    @property
    def changes(self):
        if self._changes is not None:
            return self._changes
        if self._cpsd:
            changes = metadata.decode_files_sidedata(self._sidedata)
        else:
            changes = metadata.ChangingFiles(
                touched=self.files or (),
                added=self.filesadded or (),
                removed=self.filesremoved or (),
                p1_copies=self.p1copies or {},
                p2_copies=self.p2copies or {},
            )
        self._changes = changes
        return changes

    @property
    def files(self):
        if self._cpsd:
            return sorted(self.changes.touched)
        off = self._offsets
        if off[2] == off[3]:
            return []

        return self._text[off[2] + 1 : off[3]].split(b'\n')

    @property
    def filesadded(self):
        if self._cpsd:
            return self.changes.added
        else:
            rawindices = self.extra.get(b'filesadded')
            if rawindices is None:
                return None
            return metadata.decodefileindices(self.files, rawindices)

    @property
    def filesremoved(self):
        if self._cpsd:
            return self.changes.removed
        else:
            rawindices = self.extra.get(b'filesremoved')
            if rawindices is None:
                return None
            return metadata.decodefileindices(self.files, rawindices)

    @property
    def p1copies(self):
        if self._cpsd:
            return self.changes.copied_from_p1
        else:
            rawcopies = self.extra.get(b'p1copies')
            if rawcopies is None:
                return None
            return metadata.decodecopies(self.files, rawcopies)

    @property
    def p2copies(self):
        if self._cpsd:
            return self.changes.copied_from_p2
        else:
            rawcopies = self.extra.get(b'p2copies')
            if rawcopies is None:
                return None
            return metadata.decodecopies(self.files, rawcopies)

    @property
    def description(self):
        return encoding.tolocal(self._text[self._offsets[3] + 2 :])

    @property
    def branchinfo(self):
        extra = self.extra
        return encoding.tolocal(extra.get(b"branch")), b'close' in extra


class changelog(revlog.revlog):
    def __init__(self, opener, trypending=False, concurrencychecker=None):
        """Load a changelog revlog using an opener.

        If ``trypending`` is true, we attempt to load the index from a
        ``00changelog.i.a`` file instead of the default ``00changelog.i``.
        The ``00changelog.i.a`` file contains index (and possibly inline
        revision) data for a transaction that hasn't been finalized yet.
        It exists in a separate file to facilitate readers (such as
        hooks processes) accessing data before a transaction is finalized.

        ``concurrencychecker`` will be passed to the revlog init function, see
        the documentation there.
        """
        revlog.revlog.__init__(
            self,
            opener,
            target=(revlog_constants.KIND_CHANGELOG, None),
            radix=b'00changelog',
            checkambig=True,
            mmaplargeindex=True,
            persistentnodemap=opener.options.get(b'persistent-nodemap', False),
            concurrencychecker=concurrencychecker,
            trypending=trypending,
            may_inline=False,
        )

        if self._initempty and (self._format_version == revlog.REVLOGV1):
            # changelogs don't benefit from generaldelta.

            self._format_flags &= ~revlog.FLAG_GENERALDELTA
            self.delta_config.general_delta = False

        # Delta chains for changelogs tend to be very small because entries
        # tend to be small and don't delta well with each other. So disable
        # delta chains.
        self._storedeltachains = False

        self._v2_delayed = False
        self._filteredrevs = frozenset()
        self._filteredrevs_hashcache = {}
        self._copiesstorage = opener.options.get(b'copies-storage')

    @property
    def filteredrevs(self):
        return self._filteredrevs

    @filteredrevs.setter
    def filteredrevs(self, val):
        # Ensure all updates go through this function
        assert isinstance(val, frozenset)
        self._filteredrevs = val
        self._filteredrevs_hashcache = {}

    def _write_docket(self, tr):
        if not self._v2_delayed:
            super(changelog, self)._write_docket(tr)

    def delayupdate(self, tr):
        """delay visibility of index updates to other readers"""
        assert not self._inner.is_open
        assert not self._may_inline
        # enforce that older changelogs that are still inline are split at
        # the first opportunity.
        if self._inline:
            self._enforceinlinesize(tr)
        if self._docket is not None:
            self._v2_delayed = True
        else:
            new_index = self._inner.delay()
            if new_index is not None:
                self._indexfile = new_index
                tr.registertmp(new_index)
-        tr.addpending(b'cl-%i' % id(self), self._writepending)
-        tr.addfinalize(b'cl-%i' % id(self), self._finalize)
+        # use "000" as prefix to make sure we run before the splitting of
+        # legacy inline changelogs.
+        tr.addpending(b'000-cl-%i' % id(self), self._writepending)
+        tr.addfinalize(b'000-cl-%i' % id(self), self._finalize)
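
        # A minimal sketch of why the b'000-' prefix matters, assuming (as in
        # Mercurial's transaction) that pending/finalize callbacks run in
        # sorted category order; the category names below are hypothetical:
        #
        #   callbacks = {
        #       b'b500-split-legacy-inline-changelog': split_cb,
        #       b'000-cl-42': changelog_write_pending_cb,
        #   }
        #   for category in sorted(callbacks):
        #       callbacks[category]()  # b'000-cl-42' now runs first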

    def _finalize(self, tr):
        """finalize index updates"""
        assert not self._inner.is_open
        if self._docket is not None:
            self._docket.write(tr)
            self._v2_delayed = False
        else:
            new_index_file = self._inner.finalize_pending()
            self._indexfile = new_index_file
            if self._inline:
                msg = 'changelog should not be inline at that point'
                raise error.ProgrammingError(msg)

    def _writepending(self, tr):
        """create a file containing the unfinalized state for
        pretxnchangegroup"""
        assert not self._inner.is_open
        if self._docket:
            any_pending = self._docket.write(tr, pending=True)
            self._v2_delayed = False
        else:
            new_index, any_pending = self._inner.write_pending()
            if new_index is not None:
                self._indexfile = new_index
                tr.registertmp(new_index)
        return any_pending
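
        # How this is consumed (illustrative): hooks such as pretxnclose and
        # pretxnchangegroup run with HG_PENDING set to the repository root,
        # so `hg` processes spawned by them load the pending data written
        # here (e.g. 00changelog.i.a) instead of the finalized files, as in
        # the hook used by the test above:
        #
        #   [hooks]
        #   pretxnchangegroup = hg log -r tip -T "pre-txn tip rev: {rev}\n"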

    def _enforceinlinesize(self, tr):
        if not self.is_delaying:
            revlog.revlog._enforceinlinesize(self, tr)

    def read(self, nodeorrev):
        """Obtain data from a parsed changelog revision.

        Returns a 6-tuple of:

        - manifest node in binary
        - author/user as a localstr
        - date as a 2-tuple of (time, timezone)
        - list of files
        - commit message as a localstr
        - dict of extra metadata

        Unless you need to access all fields, consider calling
        ``changelogrevision`` instead, as it is faster for partial object
        access.
        """
        d = self._revisiondata(nodeorrev)
        sidedata = self.sidedata(nodeorrev)
        copy_sd = self._copiesstorage == b'changeset-sidedata'
        c = changelogrevision(self, d, sidedata, copy_sd)
        return (c.manifest, c.user, c.date, c.files, c.description, c.extra)

    def changelogrevision(self, nodeorrev):
        """Obtain a ``changelogrevision`` for a node or revision."""
        text = self._revisiondata(nodeorrev)
        sidedata = self.sidedata(nodeorrev)
        return changelogrevision(
            self, text, sidedata, self._copiesstorage == b'changeset-sidedata'
        )

    def readfiles(self, nodeorrev):
        """
        short version of read that only returns the files modified by the cset
        """
        text = self.revision(nodeorrev)
        if not text:
            return []
        last = text.index(b"\n\n")
        l = text[:last].split(b'\n')
        return l[3:]

    def add(
        self,
        manifest,
        files,
        desc,
        transaction,
        p1,
        p2,
        user,
        date=None,
        extra=None,
    ):
        # Convert to UTF-8 encoded bytestrings as the very first
        # thing: calling any method on a localstr object will turn it
        # into a str object and the cached UTF-8 string is thus lost.
        user, desc = encoding.fromlocal(user), encoding.fromlocal(desc)

        user = user.strip()
        # An empty username or a username with a "\n" will make the
        # revision text contain two "\n\n" sequences -> corrupt
        # repository since read cannot unpack the revision.
        if not user:
            raise error.StorageError(_(b"empty username"))
        if b"\n" in user:
            raise error.StorageError(
                _(b"username %r contains a newline") % pycompat.bytestr(user)
            )

        desc = stripdesc(desc)

        if date:
            parseddate = b"%d %d" % dateutil.parsedate(date)
        else:
            parseddate = b"%d %d" % dateutil.makedate()
        if extra:
            branch = extra.get(b"branch")
            if branch in (b"default", b""):
                del extra[b"branch"]
            elif branch in (b".", b"null", b"tip"):
                raise error.StorageError(
                    _(b'the name \'%s\' is reserved') % branch
                )
        sortedfiles = sorted(files.touched)
        flags = 0
        sidedata = None
        if self._copiesstorage == b'changeset-sidedata':
            if files.has_copies_info:
                flags |= flagutil.REVIDX_HASCOPIESINFO
            sidedata = metadata.encode_files_sidedata(files)

        if extra:
            extra = encodeextra(extra)
            parseddate = b"%s %s" % (parseddate, extra)
        l = [hex(manifest), user, parseddate] + sortedfiles + [b"", desc]
        text = b"\n".join(l)
        rev = self.addrevision(
            text, transaction, len(self), p1, p2, sidedata=sidedata, flags=flags
        )
        return self.node(rev)

    def branchinfo(self, rev):
        """return the branch name and open/close state of a revision

        This function exists because creating a changectx object
        just to access this is costly."""
        return self.changelogrevision(rev).branchinfo

    def _nodeduplicatecallback(self, transaction, rev):
        # keep track of revisions that got "re-added", e.g. unbundle of a
        # known rev.
        #
        # We track them in a list to preserve their order from the source
        # bundle.
        duplicates = transaction.changes.setdefault(b'revduplicates', [])
        duplicates.append(rev)
@@ -1,4078 +1,4081 b''
# revlog.py - storage back-end for mercurial
# coding: utf8
#
# Copyright 2005-2007 Olivia Mackall <olivia@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""Storage back-end for Mercurial.

This provides efficient delta storage with O(1) retrieve and append
and O(changes) merge between branches.
"""


import binascii
import collections
import contextlib
import functools
import io
import os
import struct
import weakref
import zlib

# import stuff from node for others to import from revlog
from .node import (
    bin,
    hex,
    nullrev,
    sha1nodeconstants,
    short,
    wdirrev,
)
from .i18n import _
from .revlogutils.constants import (
    ALL_KINDS,
    CHANGELOGV2,
    COMP_MODE_DEFAULT,
    COMP_MODE_INLINE,
    COMP_MODE_PLAIN,
    DELTA_BASE_REUSE_NO,
    DELTA_BASE_REUSE_TRY,
    ENTRY_RANK,
    FEATURES_BY_VERSION,
    FLAG_GENERALDELTA,
    FLAG_INLINE_DATA,
    INDEX_HEADER,
    KIND_CHANGELOG,
    KIND_FILELOG,
    RANK_UNKNOWN,
    REVLOGV0,
    REVLOGV1,
    REVLOGV1_FLAGS,
    REVLOGV2,
    REVLOGV2_FLAGS,
    REVLOG_DEFAULT_FLAGS,
    REVLOG_DEFAULT_FORMAT,
    REVLOG_DEFAULT_VERSION,
    SUPPORTED_FLAGS,
)
from .revlogutils.flagutil import (
    REVIDX_DEFAULT_FLAGS,
    REVIDX_ELLIPSIS,
    REVIDX_EXTSTORED,
    REVIDX_FLAGS_ORDER,
    REVIDX_HASCOPIESINFO,
    REVIDX_ISCENSORED,
    REVIDX_RAWTEXT_CHANGING_FLAGS,
)
from .thirdparty import attr
from . import (
    ancestor,
    dagop,
    error,
    mdiff,
    policy,
    pycompat,
    revlogutils,
    templatefilters,
    util,
)
from .interfaces import (
    repository,
    util as interfaceutil,
)
from .revlogutils import (
    deltas as deltautil,
    docket as docketutil,
    flagutil,
    nodemap as nodemaputil,
    randomaccessfile,
    revlogv0,
    rewrite,
    sidedata as sidedatautil,
)
from .utils import (
    storageutil,
    stringutil,
)

# blanked usage of all the names to prevent pyflakes constraints
# We need these names available in the module for extensions.

REVLOGV0
REVLOGV1
REVLOGV2
CHANGELOGV2
FLAG_INLINE_DATA
FLAG_GENERALDELTA
REVLOG_DEFAULT_FLAGS
REVLOG_DEFAULT_FORMAT
REVLOG_DEFAULT_VERSION
REVLOGV1_FLAGS
REVLOGV2_FLAGS
REVIDX_ISCENSORED
REVIDX_ELLIPSIS
REVIDX_HASCOPIESINFO
REVIDX_EXTSTORED
REVIDX_DEFAULT_FLAGS
REVIDX_FLAGS_ORDER
REVIDX_RAWTEXT_CHANGING_FLAGS

parsers = policy.importmod('parsers')
rustancestor = policy.importrust('ancestor')
rustdagop = policy.importrust('dagop')
rustrevlog = policy.importrust('revlog')

# Aliased for performance.
_zlibdecompress = zlib.decompress

# max size of inline data embedded into a revlog
_maxinline = 131072


# Flag processors for REVIDX_ELLIPSIS.
def ellipsisreadprocessor(rl, text):
    return text, False


def ellipsiswriteprocessor(rl, text):
    return text, False


def ellipsisrawprocessor(rl, text):
    return False


ellipsisprocessor = (
    ellipsisreadprocessor,
    ellipsiswriteprocessor,
    ellipsisrawprocessor,
)


def _verify_revision(rl, skipflags, state, node):
    """Verify the integrity of the given revlog ``node`` while providing a hook
    point for extensions to influence the operation."""
    if skipflags:
        state[b'skipread'].add(node)
    else:
        # Side-effect: read content and verify hash.
        rl.revision(node)


# True if a fast implementation for persistent-nodemap is available
#
# We also consider we have a "fast" implementation in "pure" python because
# people using pure don't really have performance considerations (and a
# wheelbarrow of other slowness sources)
HAS_FAST_PERSISTENT_NODEMAP = rustrevlog is not None or hasattr(
    parsers, 'BaseIndexObject'
)


@interfaceutil.implementer(repository.irevisiondelta)
@attr.s(slots=True)
class revlogrevisiondelta:
    node = attr.ib()
    p1node = attr.ib()
    p2node = attr.ib()
    basenode = attr.ib()
    flags = attr.ib()
    baserevisionsize = attr.ib()
    revision = attr.ib()
    delta = attr.ib()
    sidedata = attr.ib()
    protocol_flags = attr.ib()
    linknode = attr.ib(default=None)


@interfaceutil.implementer(repository.iverifyproblem)
@attr.s(frozen=True)
class revlogproblem:
    warning = attr.ib(default=None)
    error = attr.ib(default=None)
    node = attr.ib(default=None)


def parse_index_v1(data, inline):
    # call the C implementation to parse the index data
    index, cache = parsers.parse_index2(data, inline)
    return index, cache


def parse_index_v2(data, inline):
    # call the C implementation to parse the index data
    index, cache = parsers.parse_index2(data, inline, format=REVLOGV2)
    return index, cache


def parse_index_cl_v2(data, inline):
    # call the C implementation to parse the index data
    index, cache = parsers.parse_index2(data, inline, format=CHANGELOGV2)
    return index, cache


if hasattr(parsers, 'parse_index_devel_nodemap'):

    def parse_index_v1_nodemap(data, inline):
        index, cache = parsers.parse_index_devel_nodemap(data, inline)
        return index, cache


else:
    parse_index_v1_nodemap = None


def parse_index_v1_rust(data, inline, default_header):
    cache = (0, data) if inline else None
    return rustrevlog.Index(data, default_header), cache


# corresponds to uncompressed length of indexformatng (2 gigs, 4-byte
# signed integer)
_maxentrysize = 0x7FFFFFFF

FILE_TOO_SHORT_MSG = _(
    b'cannot read from revlog %s;'
    b' expected %d bytes from offset %d, data size is %d'
)

hexdigits = b'0123456789abcdefABCDEF'


class _Config:
    def copy(self):
        return self.__class__(**self.__dict__)


@attr.s()
class FeatureConfig(_Config):
    """Hold configuration values about the available revlog features"""

    # the default compression engine
    compression_engine = attr.ib(default=b'zlib')
    # compression engines options
    compression_engine_options = attr.ib(default=attr.Factory(dict))

    # can we use censor on this revlog
    censorable = attr.ib(default=False)
    # does this revlog use the "side data" feature
    has_side_data = attr.ib(default=False)
    # might remove rank configuration once the computation has no impact
    compute_rank = attr.ib(default=False)
    # parent order is supposed to be semantically irrelevant, so we
    # normally resort parents to ensure that the first parent is non-null,
    # if there is a non-null parent at all.
    # filelog abuses the parent order as a flag to mark some instances of
    # meta-encoded files, so allow it to disable this behavior.
    canonical_parent_order = attr.ib(default=False)
    # can ellipsis commit be used
    enable_ellipsis = attr.ib(default=False)

    def copy(self):
        new = super().copy()
        new.compression_engine_options = self.compression_engine_options.copy()
        return new


@attr.s()
class DataConfig(_Config):
    """Hold configuration values about how the revlog data are read"""

    # should we try to open the "pending" version of the revlog
    try_pending = attr.ib(default=False)
    # should we try to open the "split" version of the revlog
    try_split = attr.ib(default=False)
    # When True, indexfile should be opened with checkambig=True at writing,
    # to avoid file stat ambiguity.
    check_ambig = attr.ib(default=False)

    # If true, use mmap instead of reading to deal with large indexes
    mmap_large_index = attr.ib(default=False)
    # how much data counts as large
    mmap_index_threshold = attr.ib(default=None)
    # How much data to read and cache into the raw revlog data cache.
    chunk_cache_size = attr.ib(default=65536)

    # The size of the uncompressed cache compared to the largest revision seen.
    uncompressed_cache_factor = attr.ib(default=None)

    # The number of chunks cached
    uncompressed_cache_count = attr.ib(default=None)

    # Allow sparse reading of the revlog data
    with_sparse_read = attr.ib(default=False)
    # minimal density of a sparse read chunk
    sr_density_threshold = attr.ib(default=0.50)
    # minimal size of the data we skip when performing sparse reads
    sr_min_gap_size = attr.ib(default=262144)

    # are deltas encoded against arbitrary bases
    generaldelta = attr.ib(default=False)


@attr.s()
class DeltaConfig(_Config):
    """Hold configuration values about how new deltas are computed

    Some attributes are duplicated from DataConfig to help keep each object
    self-contained.
    """

    # can deltas be encoded against arbitrary bases
    general_delta = attr.ib(default=False)
    # Allow sparse writing of the revlog data
    sparse_revlog = attr.ib(default=False)
    # maximum length of a delta chain
    max_chain_len = attr.ib(default=None)
    # Maximum distance between delta chain base start and end
    max_deltachain_span = attr.ib(default=-1)
    # If `upper_bound_comp` is not None, this is the expected maximal gain from
    # compression for the data content.
    upper_bound_comp = attr.ib(default=None)
    # Should we try a delta against both parents
    delta_both_parents = attr.ib(default=True)
    # Test delta base candidates in groups of this maximal size.
    candidate_group_chunk_size = attr.ib(default=0)
    # Should we display debug information about delta computation
    debug_delta = attr.ib(default=False)
    # trust incoming deltas by default
    lazy_delta = attr.ib(default=True)
    # trust the base of incoming deltas by default
    lazy_delta_base = attr.ib(default=False)
346
347
347
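# Illustrative sketch (not part of the module API): the config objects above
# are plain attrs value-holders, so assembling them by hand, e.g. for a test
# or an experiment, could look roughly like the following. The values shown
# are assumptions, not recommended defaults:
#
#     data_config = DataConfig(chunk_cache_size=65536, with_sparse_read=True)
#     delta_config = DeltaConfig(general_delta=True, sparse_revlog=True)
#     tweaked = delta_config.copy()  # _Config.copy() returns a fresh instance
#     tweaked.max_chain_len = 1000
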
class _InnerRevlog:
    """An inner layer of the revlog object

    That layer exists to be able to delegate some operations to Rust; its
    boundaries are arbitrary and based on what we can delegate to Rust.
    """

    def __init__(
        self,
        opener,
        index,
        index_file,
        data_file,
        sidedata_file,
        inline,
        data_config,
        delta_config,
        feature_config,
        chunk_cache,
        default_compression_header,
    ):
        self.opener = opener
        self.index = index

        self.index_file = index_file
        self.data_file = data_file
        self.sidedata_file = sidedata_file
        self.inline = inline
        self.data_config = data_config
        self.delta_config = delta_config
        self.feature_config = feature_config

        # used during diverted write.
        self._orig_index_file = None

        self._default_compression_header = default_compression_header

        # index

        # 3-tuple of file handles being used for active writing.
        self._writinghandles = None

        self._segmentfile = randomaccessfile.randomaccessfile(
            self.opener,
            (self.index_file if self.inline else self.data_file),
            self.data_config.chunk_cache_size,
            chunk_cache,
        )
        self._segmentfile_sidedata = randomaccessfile.randomaccessfile(
            self.opener,
            self.sidedata_file,
            self.data_config.chunk_cache_size,
        )

        # revlog header -> revlog compressor
        self._decompressors = {}
        # 3-tuple of (node, rev, text) for a raw revision.
        self._revisioncache = None

        # cache some uncompressed chunks
        # rev → uncompressed_chunk
        #
        # the max cost is dynamically updated to be proportional to the
        # size of the revisions we actually encounter.
        self._uncompressed_chunk_cache = None
        if self.data_config.uncompressed_cache_factor is not None:
            self._uncompressed_chunk_cache = util.lrucachedict(
                self.data_config.uncompressed_cache_count,
                maxcost=65536,  # some arbitrary initial value
            )

        self._delay_buffer = None

    def __len__(self):
        return len(self.index)

    def clear_cache(self):
        assert not self.is_delaying
        self._revisioncache = None
        if self._uncompressed_chunk_cache is not None:
            self._uncompressed_chunk_cache.clear()
        self._segmentfile.clear_cache()
        self._segmentfile_sidedata.clear_cache()

    @property
    def canonical_index_file(self):
        if self._orig_index_file is not None:
            return self._orig_index_file
        return self.index_file

    @property
    def is_delaying(self):
        """is the revlog currently delaying the visibility of written data?

        The delaying mechanism can be either in-memory or written on disk in a
        side-file."""
        return (self._delay_buffer is not None) or (
            self._orig_index_file is not None
        )

    # Derived from index values.

    def start(self, rev):
        """the offset of the data chunk for this revision"""
        return int(self.index[rev][0] >> 16)

    def length(self, rev):
        """the length of the data chunk for this revision"""
        return self.index[rev][1]

    def end(self, rev):
        """the end of the data chunk for this revision"""
        return self.start(rev) + self.length(rev)

    def deltaparent(self, rev):
        """return the delta parent of the given revision"""
        base = self.index[rev][3]
        if base == rev:
            return nullrev
        elif self.delta_config.general_delta:
            return base
        else:
            return rev - 1

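    # Illustrative note (sketch, values assumed): with general_delta, the
    # base stored in entry[3] is used directly, so deltaparent(5) may be any
    # earlier revision (or nullrev for a full snapshot); without
    # general_delta, deltas always chain linearly and deltaparent(5) == 4.
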
    def issnapshot(self, rev):
        """tells whether rev is a snapshot"""
        if not self.delta_config.sparse_revlog:
            return self.deltaparent(rev) == nullrev
        elif hasattr(self.index, 'issnapshot'):
            # assign the method directly to cache both the test and the access
            self.issnapshot = self.index.issnapshot
            return self.issnapshot(rev)
        if rev == nullrev:
            return True
        entry = self.index[rev]
        base = entry[3]
        if base == rev:
            return True
        if base == nullrev:
            return True
        p1 = entry[5]
        while self.length(p1) == 0:
            b = self.deltaparent(p1)
            if b == p1:
                break
            p1 = b
        p2 = entry[6]
        while self.length(p2) == 0:
            b = self.deltaparent(p2)
            if b == p2:
                break
            p2 = b
        if base == p1 or base == p2:
            return False
        return self.issnapshot(base)

    def _deltachain(self, rev, stoprev=None):
        """Obtain the delta chain for a revision.

        ``stoprev`` specifies a revision to stop at. If not specified, we
        stop at the base of the chain.

        Returns a 2-tuple of (chain, stopped) where ``chain`` is a list of
        revs in ascending order and ``stopped`` is a bool indicating whether
        ``stoprev`` was hit.
        """
        generaldelta = self.delta_config.general_delta
        # Try C implementation.
        try:
            return self.index.deltachain(rev, stoprev, generaldelta)
        except AttributeError:
            pass

        chain = []

        # Alias to prevent attribute lookup in tight loop.
        index = self.index

        iterrev = rev
        e = index[iterrev]
        while iterrev != e[3] and iterrev != stoprev:
            chain.append(iterrev)
            if generaldelta:
                iterrev = e[3]
            else:
                iterrev -= 1
            e = index[iterrev]

        if iterrev == stoprev:
            stopped = True
        else:
            chain.append(iterrev)
            stopped = False

        chain.reverse()
        return chain, stopped

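    # Illustrative use (a sketch mirroring ``raw_text`` below): a revision is
    # reconstructed by fetching the chunks of its delta chain and patching
    # them onto the base text:
    #
    #     chain, stopped = self._deltachain(rev)
    #     bins = self._chunks(chain)
    #     rawtext = mdiff.patches(bytes(bins[0]), bins[1:])
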
    @util.propertycache
    def _compressor(self):
        engine = util.compengines[self.feature_config.compression_engine]
        return engine.revlogcompressor(
            self.feature_config.compression_engine_options
        )

    @util.propertycache
    def _decompressor(self):
        """the default decompressor"""
        if self._default_compression_header is None:
            return None
        t = self._default_compression_header
        c = self._get_decompressor(t)
        return c.decompress

    def _get_decompressor(self, t):
        try:
            compressor = self._decompressors[t]
        except KeyError:
            try:
                engine = util.compengines.forrevlogheader(t)
                compressor = engine.revlogcompressor(
                    self.feature_config.compression_engine_options
                )
                self._decompressors[t] = compressor
            except KeyError:
                raise error.RevlogError(
                    _(b'unknown compression type %s') % binascii.hexlify(t)
                )
        return compressor

    def compress(self, data):
        """Generate a possibly-compressed representation of data."""
        if not data:
            return b'', data

        compressed = self._compressor.compress(data)

        if compressed:
            # The revlog compressor added the header in the returned data.
            return b'', compressed

        if data[0:1] == b'\0':
            return b'', data
        return b'u', data

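    # Return-shape summary (sketch): ``compress`` yields one of three forms:
    #
    #     (b'', <engine-compressed bytes>)        # header embedded by engine
    #     (b'', <raw bytes starting with b'\0'>)  # stored as-is
    #     (b'u', <raw bytes>)                     # 'u' marks uncompressed data
    #
    # ``decompress`` below relies on that first byte to route the chunk.
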
    def decompress(self, data):
        """Decompress a revlog chunk.

        The chunk is expected to begin with a header identifying the
        format type so it can be routed to an appropriate decompressor.
        """
        if not data:
            return data

        # Revlogs are read much more frequently than they are written and many
        # chunks only take microseconds to decompress, so performance is
        # important here.
        #
        # We can make a few assumptions about revlogs:
        #
        # 1) the majority of chunks will be compressed (as opposed to inline
        #    raw data).
        # 2) decompressing *any* data will likely be at least 10x slower than
        #    returning raw inline data.
        # 3) we want to prioritize common and officially supported compression
        #    engines
        #
        # It follows that we want to optimize for the "decompress compressed
        # data when encoded with common and officially supported compression
        # engines" case over "raw data" and "data encoded by less common or
        # non-official compression engines." That is why we have the inline
        # lookup first followed by the compengines lookup.
        #
        # According to `hg perfrevlogchunks`, this is ~0.5% faster for zlib
        # compressed chunks. And this matters for changelog and manifest reads.
        t = data[0:1]

        if t == b'x':
            try:
                return _zlibdecompress(data)
            except zlib.error as e:
                raise error.RevlogError(
                    _(b'revlog decompress error: %s')
                    % stringutil.forcebytestr(e)
                )
        # '\0' is more common than 'u' so it goes first.
        elif t == b'\0':
            return data
        elif t == b'u':
            return util.buffer(data, 1)

        compressor = self._get_decompressor(t)

        return compressor.decompress(data)

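    # Dispatch summary (sketch): the first byte routes a chunk. b'x' takes
    # the inlined zlib fast path, b'\0' is returned verbatim, b'u' is raw
    # data with the marker stripped, and anything else (e.g. a zstd header)
    # falls through to the compengines lookup in ``_get_decompressor``.
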
    @contextlib.contextmanager
    def reading(self):
        """Context manager that keeps data and sidedata files open for reading"""
        if len(self.index) == 0:
            yield  # nothing to be read
        elif self._delay_buffer is not None and self.inline:
            msg = "revlog with delayed write should not be inline"
            raise error.ProgrammingError(msg)
        else:
            with self._segmentfile.reading():
                with self._segmentfile_sidedata.reading():
                    yield

    @property
    def is_writing(self):
        """True if a writing context is open"""
        return self._writinghandles is not None

    @property
    def is_open(self):
        """True if any file handle is being held

        Used for asserts and debugging in the Python code"""
        return self._segmentfile.is_open or self._segmentfile_sidedata.is_open

    @contextlib.contextmanager
    def writing(self, transaction, data_end=None, sidedata_end=None):
        """Open the revlog files for writing

        Adding content to a revlog should be done within such a context.
        """
        if self.is_writing:
            yield
        else:
            ifh = dfh = sdfh = None
            try:
                r = len(self.index)
                # opening the data file.
                dsize = 0
                if r:
                    dsize = self.end(r - 1)
                dfh = None
                if not self.inline:
                    try:
                        dfh = self.opener(self.data_file, mode=b"r+")
                        if data_end is None:
                            dfh.seek(0, os.SEEK_END)
                        else:
                            dfh.seek(data_end, os.SEEK_SET)
                    except FileNotFoundError:
                        dfh = self.opener(self.data_file, mode=b"w+")
                    transaction.add(self.data_file, dsize)
                if self.sidedata_file is not None:
                    assert sidedata_end is not None
                    # revlog-v2 does not inline, help Pytype
                    assert dfh is not None
                    try:
                        sdfh = self.opener(self.sidedata_file, mode=b"r+")
                        sdfh.seek(sidedata_end, os.SEEK_SET)
                    except FileNotFoundError:
                        sdfh = self.opener(self.sidedata_file, mode=b"w+")
                    transaction.add(self.sidedata_file, sidedata_end)

                # opening the index file.
                isize = r * self.index.entry_size
                ifh = self.__index_write_fp()
                if self.inline:
                    transaction.add(self.index_file, dsize + isize)
                else:
                    transaction.add(self.index_file, isize)
                # exposing all file handles for writing.
                self._writinghandles = (ifh, dfh, sdfh)
                self._segmentfile.writing_handle = ifh if self.inline else dfh
                self._segmentfile_sidedata.writing_handle = sdfh
                yield
            finally:
                self._writinghandles = None
                self._segmentfile.writing_handle = None
                self._segmentfile_sidedata.writing_handle = None
                if dfh is not None:
                    dfh.close()
                if sdfh is not None:
                    sdfh.close()
                # closing the index file last to avoid exposing references to
                # potentially unflushed data content.
                if ifh is not None:
                    ifh.close()

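    # Typical usage (a sketch, assuming a surrounding transaction ``tr``):
    # all appends happen inside the context so the transaction records the
    # truncation points before any byte is written:
    #
    #     with inner.writing(tr):
    #         inner.write_entry(tr, entry, data, link, offset, ...)
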
    def __index_write_fp(self, index_end=None):
        """internal method to open the index file for writing

        You should not use this directly; use `_writing` instead
        """
        try:
            if self._delay_buffer is None:
                f = self.opener(
                    self.index_file,
                    mode=b"r+",
                    checkambig=self.data_config.check_ambig,
                )
            else:
                # check_ambig affects the way we open a file for writing;
                # however, here we do not actually open a file, as writes
                # will be appended to a delay_buffer. So check_ambig is not
                # meaningful and is unneeded here.
                f = randomaccessfile.appender(
                    self.opener, self.index_file, b"r+", self._delay_buffer
                )
            if index_end is None:
                f.seek(0, os.SEEK_END)
            else:
                f.seek(index_end, os.SEEK_SET)
            return f
        except FileNotFoundError:
            if self._delay_buffer is None:
                return self.opener(
                    self.index_file,
                    mode=b"w+",
                    checkambig=self.data_config.check_ambig,
                )
            else:
                return randomaccessfile.appender(
                    self.opener, self.index_file, b"w+", self._delay_buffer
                )

    def __index_new_fp(self):
        """internal method to create a new index file for writing

        You should not use this unless you are upgrading from an inline revlog
        """
        return self.opener(
            self.index_file,
            mode=b"w",
            checkambig=self.data_config.check_ambig,
        )

    def split_inline(self, tr, header, new_index_file_path=None):
        """split the data of an inline revlog into an index and a data file"""
        assert self._delay_buffer is None
        existing_handles = False
        if self._writinghandles is not None:
            existing_handles = True
            fp = self._writinghandles[0]
            fp.flush()
            fp.close()
            # We can't use the cached file handle after close(). So prevent
            # its usage.
            self._writinghandles = None
            self._segmentfile.writing_handle = None
            # No need to deal with the sidedata writing handle, as it is only
            # relevant with revlog-v2, which is never inline and so never
            # reaches this code.

        new_dfh = self.opener(self.data_file, mode=b"w+")
        new_dfh.truncate(0)  # drop any potentially existing data
        try:
            with self.reading():
                for r in range(len(self.index)):
                    new_dfh.write(self.get_segment_for_revs(r, r)[1])
                new_dfh.flush()

            if new_index_file_path is not None:
                self.index_file = new_index_file_path
            with self.__index_new_fp() as fp:
                self.inline = False
                for i in range(len(self.index)):
                    e = self.index.entry_binary(i)
                    if i == 0:
                        packed_header = self.index.pack_header(header)
                        e = packed_header + e
                    fp.write(e)

                # If we don't use side-write, the temp file replaces the real
                # index when we exit the context manager

            self._segmentfile = randomaccessfile.randomaccessfile(
                self.opener,
                self.data_file,
                self.data_config.chunk_cache_size,
            )

            if existing_handles:
                # switched from inline to conventional; reopen the index
                ifh = self.__index_write_fp()
                self._writinghandles = (ifh, new_dfh, None)
                self._segmentfile.writing_handle = new_dfh
                new_dfh = None
                # No need to deal with the sidedata writing handle, as it is
                # only relevant with revlog-v2, which is never inline and so
                # never reaches this code.
        finally:
            if new_dfh is not None:
                new_dfh.close()
        return self.index_file

    def get_segment_for_revs(self, startrev, endrev):
        """Obtain a segment of raw data corresponding to a range of revisions.

        Accepts the start and end revisions. Requests for data may be
        satisfied by a cache.

        Returns a 2-tuple of (offset, data) for the requested range of
        revisions. Offset is the integer offset from the beginning of the
        revlog and data is a str or buffer of the raw byte data.

        Callers will need to call ``self.start(rev)`` and ``self.length(rev)``
        to determine where each revision's data begins and ends.

        API: we should consider making this a private part of the InnerRevlog
        at some point.
        """
        # Inlined self.start(startrev) & self.end(endrev) for perf reasons
        # (functions are expensive).
        index = self.index
        istart = index[startrev]
        start = int(istart[0] >> 16)
        if startrev == endrev:
            end = start + istart[1]
        else:
            iend = index[endrev]
            end = int(iend[0] >> 16) + iend[1]

        if self.inline:
            start += (startrev + 1) * self.index.entry_size
            end += (endrev + 1) * self.index.entry_size
        length = end - start

        return start, self._segmentfile.read_chunk(start, length)

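    # Worked example (assumed numbers): for an inline revlog with 64-byte
    # index entries, a revision 2 whose data starts at logical offset 100
    # lives at physical offset 100 + (2 + 1) * 64 = 292 in the ``.i`` file,
    # because each revision's data is preceded by the index entries of all
    # revisions up to and including itself.
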
    def _chunk(self, rev):
        """Obtain a single decompressed chunk for a revision.

        Accepts an integer revision number. Returns a str holding
        uncompressed data for the requested revision.
        """
        if self._uncompressed_chunk_cache is not None:
            uncomp = self._uncompressed_chunk_cache.get(rev)
            if uncomp is not None:
                return uncomp

        compression_mode = self.index[rev][10]
        data = self.get_segment_for_revs(rev, rev)[1]
        if compression_mode == COMP_MODE_PLAIN:
            uncomp = data
        elif compression_mode == COMP_MODE_DEFAULT:
            uncomp = self._decompressor(data)
        elif compression_mode == COMP_MODE_INLINE:
            uncomp = self.decompress(data)
        else:
            msg = b'unknown compression mode %d'
            msg %= compression_mode
            raise error.RevlogError(msg)
        if self._uncompressed_chunk_cache is not None:
            self._uncompressed_chunk_cache.insert(rev, uncomp, cost=len(uncomp))
        return uncomp

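    # Compression-mode summary (sketch): COMP_MODE_PLAIN stores the chunk
    # verbatim, COMP_MODE_DEFAULT reuses the revlog-wide default decompressor
    # (the per-chunk header is omitted), and COMP_MODE_INLINE carries its own
    # engine header and goes through ``decompress`` above.
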
    def _chunks(self, revs, targetsize=None):
        """Obtain decompressed chunks for the specified revisions.

        Accepts an iterable of numeric revisions that are assumed to be in
        ascending order.

        This function is similar to calling ``self._chunk()`` multiple times,
        but is faster.

        Returns a list with decompressed data for each requested revision.
        """
        if not revs:
            return []
        start = self.start
        length = self.length
        inline = self.inline
        iosize = self.index.entry_size
        buffer = util.buffer

        fetched_revs = []
        fadd = fetched_revs.append

        chunks = []
        ladd = chunks.append

        if self._uncompressed_chunk_cache is None:
            fetched_revs = revs
        else:
            for rev in revs:
                cached_value = self._uncompressed_chunk_cache.get(rev)
                if cached_value is None:
                    fadd(rev)
                else:
                    ladd((rev, cached_value))

        if not fetched_revs:
            slicedchunks = ()
        elif not self.data_config.with_sparse_read:
            slicedchunks = (fetched_revs,)
        else:
            slicedchunks = deltautil.slicechunk(
                self,
                fetched_revs,
                targetsize=targetsize,
            )

        for revschunk in slicedchunks:
            firstrev = revschunk[0]
            # Skip trailing revisions with empty diff
            for lastrev in revschunk[::-1]:
                if length(lastrev) != 0:
                    break

            try:
                offset, data = self.get_segment_for_revs(firstrev, lastrev)
            except OverflowError:
                # issue4215 - we can't cache a run of chunks greater than
                # 2G on Windows
                for rev in revschunk:
                    ladd((rev, self._chunk(rev)))
                # the chunks were collected one by one; skip the segment path
                continue

            decomp = self.decompress
            # self._decompressor might be None, but will not be used in that
            # case
            def_decomp = self._decompressor
            for rev in revschunk:
                chunkstart = start(rev)
                if inline:
                    chunkstart += (rev + 1) * iosize
                chunklength = length(rev)
                comp_mode = self.index[rev][10]
                c = buffer(data, chunkstart - offset, chunklength)
                if comp_mode == COMP_MODE_PLAIN:
                    c = c
                elif comp_mode == COMP_MODE_INLINE:
                    c = decomp(c)
                elif comp_mode == COMP_MODE_DEFAULT:
                    c = def_decomp(c)
                else:
                    msg = b'unknown compression mode %d'
                    msg %= comp_mode
                    raise error.RevlogError(msg)
                ladd((rev, c))
                if self._uncompressed_chunk_cache is not None:
                    self._uncompressed_chunk_cache.insert(rev, c, len(c))

        chunks.sort()
        return [x[1] for x in chunks]

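    # Sparse-read behavior (sketch): when ``with_sparse_read`` is enabled,
    # ``deltautil.slicechunk`` splits the requested revisions into dense
    # slices, so one large read per slice replaces many small reads;
    # ``sr_min_gap_size`` and ``sr_density_threshold`` control when a gap is
    # worth splitting on.
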
    def raw_text(self, node, rev):
        """return the possibly unvalidated rawtext for a revision

        returns (rev, rawtext, validated)
        """

        # revision in the cache (could be useful to apply delta)
        cachedrev = None
        # An intermediate text to apply deltas to
        basetext = None

        # Check if we have the entry in cache
        # The cache entry looks like (node, rev, rawtext)
        if self._revisioncache:
            cachedrev = self._revisioncache[1]

        chain, stopped = self._deltachain(rev, stoprev=cachedrev)
        if stopped:
            basetext = self._revisioncache[2]

        # drop cache to save memory, the caller is expected to
        # update self._inner._revisioncache after validating the text
        self._revisioncache = None

        targetsize = None
        rawsize = self.index[rev][2]
        if 0 <= rawsize:
            targetsize = 4 * rawsize

        if self._uncompressed_chunk_cache is not None:
            # dynamically update the uncompressed_chunk_cache size to the
            # largest revision we saw in this revlog.
            factor = self.data_config.uncompressed_cache_factor
            candidate_size = rawsize * factor
            if candidate_size > self._uncompressed_chunk_cache.maxcost:
                self._uncompressed_chunk_cache.maxcost = candidate_size

        bins = self._chunks(chain, targetsize=targetsize)
        if basetext is None:
            basetext = bytes(bins[0])
            bins = bins[1:]

        rawtext = mdiff.patches(basetext, bins)
        del basetext  # let us have a chance to free memory early
        return (rev, rawtext, False)

    def sidedata(self, rev, sidedata_end):
        """Return the sidedata for a given revision number."""
        index_entry = self.index[rev]
        sidedata_offset = index_entry[8]
        sidedata_size = index_entry[9]

        if self.inline:
            sidedata_offset += self.index.entry_size * (1 + rev)
        if sidedata_size == 0:
            return {}

        if sidedata_end < sidedata_offset + sidedata_size:
            filename = self.sidedata_file
            end = sidedata_end
            offset = sidedata_offset
            length = sidedata_size
            m = FILE_TOO_SHORT_MSG % (filename, length, offset, end)
            raise error.RevlogError(m)

        comp_segment = self._segmentfile_sidedata.read_chunk(
            sidedata_offset, sidedata_size
        )

        comp = self.index[rev][11]
        if comp == COMP_MODE_PLAIN:
            segment = comp_segment
        elif comp == COMP_MODE_DEFAULT:
            segment = self._decompressor(comp_segment)
        elif comp == COMP_MODE_INLINE:
            segment = self.decompress(comp_segment)
        else:
            msg = b'unknown compression mode %d'
            msg %= comp
            raise error.RevlogError(msg)

        sidedata = sidedatautil.deserialize_sidedata(segment)
        return sidedata

    def write_entry(
        self,
        transaction,
        entry,
        data,
        link,
        offset,
        sidedata,
        sidedata_offset,
        index_end,
        data_end,
        sidedata_end,
    ):
        # Files opened in a+ mode have inconsistent behavior on various
        # platforms. Windows requires that a file positioning call be made
        # when the file handle transitions between reads and writes. See
        # 3686fa2b8eee and the mixedfilemodewrapper in windows.py. On other
        # platforms, Python or the platform itself can be buggy. Some versions
        # of Solaris have been observed to not append at the end of the file
        # if the file was seeked to before the end. See issue4943 for more.
        #
        # We work around this issue by inserting a seek() before writing.
        # Note: This is likely not necessary on Python 3. However, because
        # the file handle is reused for reads and may be seeked there, we need
        # to be careful before changing this.
        if self._writinghandles is None:
            msg = b'adding revision outside `revlog._writing` context'
            raise error.ProgrammingError(msg)
        ifh, dfh, sdfh = self._writinghandles
        if index_end is None:
            ifh.seek(0, os.SEEK_END)
        else:
            ifh.seek(index_end, os.SEEK_SET)
        if dfh:
            if data_end is None:
                dfh.seek(0, os.SEEK_END)
            else:
                dfh.seek(data_end, os.SEEK_SET)
        if sdfh:
            sdfh.seek(sidedata_end, os.SEEK_SET)

        curr = len(self.index) - 1
        if not self.inline:
            transaction.add(self.data_file, offset)
            if self.sidedata_file:
                transaction.add(self.sidedata_file, sidedata_offset)
            transaction.add(self.canonical_index_file, curr * len(entry))
            if data[0]:
                dfh.write(data[0])
            dfh.write(data[1])
            if sidedata:
                sdfh.write(sidedata)
            if self._delay_buffer is None:
                ifh.write(entry)
            else:
                self._delay_buffer.append(entry)
        elif self._delay_buffer is not None:
            msg = b'invalid delayed write on inline revlog'
            raise error.ProgrammingError(msg)
        else:
            offset += curr * self.index.entry_size
            transaction.add(self.canonical_index_file, offset)
            assert not sidedata
            ifh.write(entry)
            ifh.write(data[0])
            ifh.write(data[1])
        return (
            ifh.tell(),
            dfh.tell() if dfh else None,
            sdfh.tell() if sdfh else None,
        )

    def _divert_index(self):
        return self.index_file + b'.a'

    def delay(self):
        assert not self.is_open
        if self.inline:
            msg = "revlog with delayed write should not be inline"
            raise error.ProgrammingError(msg)
        if self._delay_buffer is not None or self._orig_index_file is not None:
            # delay or divert already in place
            return None
        elif len(self.index) == 0:
            self._orig_index_file = self.index_file
            self.index_file = self._divert_index()
            assert self._orig_index_file is not None
            assert self.index_file is not None
            if self.opener.exists(self.index_file):
                self.opener.unlink(self.index_file)
            return self.index_file
        else:
            self._delay_buffer = []
            return None

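    # The two strategies (sketch): an empty revlog is "diverted": its index
    # is created directly under the ``.a`` name and renamed into place by
    # ``finalize_pending``. A non-empty revlog is "delayed": new index
    # entries accumulate in ``self._delay_buffer`` until ``write_pending``
    # or ``finalize_pending`` flushes them.
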
    def write_pending(self):
        assert not self.is_open
        if self.inline:
            msg = "revlog with delayed write should not be inline"
            raise error.ProgrammingError(msg)
        if self._orig_index_file is not None:
            return None, True
        any_pending = False
        pending_index_file = self._divert_index()
        if self.opener.exists(pending_index_file):
            self.opener.unlink(pending_index_file)
        util.copyfile(
            self.opener.join(self.index_file),
            self.opener.join(pending_index_file),
        )
        if self._delay_buffer:
            with self.opener(pending_index_file, b'r+') as ifh:
                ifh.seek(0, os.SEEK_END)
                ifh.write(b"".join(self._delay_buffer))
            any_pending = True
        self._delay_buffer = None
        self._orig_index_file = self.index_file
        self.index_file = pending_index_file
        return self.index_file, any_pending

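    # Resulting state (sketch): after ``write_pending``, a pending reader
    # (opened with ``DataConfig.try_pending``) sees the ``.a`` copy holding
    # the base index plus the buffered entries, while the canonical index
    # file stays untouched until ``finalize_pending`` installs the final
    # version.
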
    def finalize_pending(self):
        assert not self.is_open
        if self.inline:
            msg = "revlog with delayed write should not be inline"
            raise error.ProgrammingError(msg)

        delay = self._delay_buffer is not None
        divert = self._orig_index_file is not None

        if delay and divert:
            assert False, "unreachable"
        elif delay:
            if self._delay_buffer:
                with self.opener(self.index_file, b'r+') as ifh:
                    ifh.seek(0, os.SEEK_END)
                    ifh.write(b"".join(self._delay_buffer))
            self._delay_buffer = None
        elif divert:
            if self.opener.exists(self.index_file):
                self.opener.rename(
                    self.index_file,
                    self._orig_index_file,
                    checkambig=True,
                )
            self.index_file = self._orig_index_file
            self._orig_index_file = None
        else:
            msg = b"neither delay nor divert found on this revlog"
            raise error.ProgrammingError(msg)
        return self.canonical_index_file


class revlog:
    """
    the underlying revision storage object

    A revlog consists of two parts, an index and the revision data.

    The index is a file with a fixed record size containing
    information on each revision, including its nodeid (hash), the
    nodeids of its parents, the position and offset of its data within
    the data file, and the revision it's based on. Finally, each entry
    contains a linkrev entry that can serve as a pointer to external
    data.

    The revision data itself is a linear collection of data chunks.
    Each chunk represents a revision and is usually represented as a
    delta against the previous chunk. To bound lookup time, runs of
    deltas are limited to about 2 times the length of the original
    version data. This makes retrieval of a version proportional to
    its size, or O(1) relative to the number of revisions.

    Both pieces of the revlog are written to in an append-only
    fashion, which means we never need to rewrite a file to insert or
    remove data, and can use some simple techniques to avoid the need
    for locking while reading.

    If checkambig, indexfile is opened with checkambig=True at
    writing, to avoid file stat ambiguity.

    If mmaplargeindex is True, and an mmapindexthreshold is set, the
    index will be mmapped rather than read if it is larger than the
    configured threshold.

    If censorable is True, the revlog can have censored revisions.

    If `upperboundcomp` is not None, this is the expected maximal gain from
    compression for the data content.

    `concurrencychecker` is an optional function that receives 3 arguments: a
    file handle, a filename, and an expected position. It should check whether
    the current position in the file handle is valid, and log/warn/fail (by
    raising).

    See mercurial/revlogutils/constants.py for details about the content of an
    index entry.
    """

    _flagserrorclass = error.RevlogError

    @staticmethod
    def is_inline_index(header_bytes):
        """Determine if a revlog is inline from the initial bytes of the index"""
        if len(header_bytes) == 0:
            return True

        header = INDEX_HEADER.unpack(header_bytes)[0]

        _format_flags = header & ~0xFFFF
        _format_version = header & 0xFFFF

        features = FEATURES_BY_VERSION[_format_version]
        return features[b'inline'](_format_flags)

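    # Worked illustration (editorial; the concrete flag value is an
    # assumption based on the masks above): the low 16 bits of the unpacked
    # header hold the format version and the high bits hold the feature
    # flags, so a header of 0x00010001 would mean version 1 with flag bit
    # 0x00010000 set (the inline flag for revlogv1):
    #
    #   header = 0x00010001
    #   header & 0xFFFF   ->  1         (format version)
    #   header & ~0xFFFF  ->  0x10000   (format flags)
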
    def __init__(
        self,
        opener,
        target,
        radix,
        postfix=None,  # only exists for `tmpcensored` now
        checkambig=False,
        mmaplargeindex=False,
        censorable=False,
        upperboundcomp=None,
        persistentnodemap=False,
        concurrencychecker=None,
        trypending=False,
        try_split=False,
        canonical_parent_order=True,
        data_config=None,
        delta_config=None,
        feature_config=None,
        may_inline=True,  # may inline new revlog
    ):
        """
        create a revlog object

        opener is a function that abstracts the file opening operation
        and can be used to implement COW semantics or the like.

        `target`: a (KIND, ID) tuple that identifies the content stored in
        this revlog. It helps the rest of the code to understand what the
        revlog is about without having to resort to heuristics and index
        filename analysis. Note that this must reliably be set by normal
        code, but that test, debug, or performance measurement code might
        not set it to an accurate value.
        """

        self.radix = radix

        self._docket_file = None
        self._indexfile = None
        self._datafile = None
        self._sidedatafile = None
        self._nodemap_file = None
        self.postfix = postfix
        self._trypending = trypending
        self._try_split = try_split
        self._may_inline = may_inline
        self.opener = opener
        if persistentnodemap:
            self._nodemap_file = nodemaputil.get_nodemap_file(self)

        assert target[0] in ALL_KINDS
        assert len(target) == 2
        self.target = target
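        # Editorial note (not in the original source): each of the three
        # config objects below is resolved with the same precedence, an
        # explicit constructor argument first, then a value carried in
        # `opener.options`, then the built-in default class.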
        if feature_config is not None:
            self.feature_config = feature_config.copy()
        elif b'feature-config' in self.opener.options:
            self.feature_config = self.opener.options[b'feature-config'].copy()
        else:
            self.feature_config = FeatureConfig()
        self.feature_config.censorable = censorable
        self.feature_config.canonical_parent_order = canonical_parent_order
        if data_config is not None:
            self.data_config = data_config.copy()
        elif b'data-config' in self.opener.options:
            self.data_config = self.opener.options[b'data-config'].copy()
        else:
            self.data_config = DataConfig()
        self.data_config.check_ambig = checkambig
        self.data_config.mmap_large_index = mmaplargeindex
        if delta_config is not None:
            self.delta_config = delta_config.copy()
        elif b'delta-config' in self.opener.options:
            self.delta_config = self.opener.options[b'delta-config'].copy()
        else:
            self.delta_config = DeltaConfig()
        self.delta_config.upper_bound_comp = upperboundcomp

        # Maps rev to chain base rev.
        self._chainbasecache = util.lrucachedict(100)

        self.index = None
        self._docket = None
        self._nodemap_docket = None
        # Mapping of partial identifiers to full nodes.
        self._pcache = {}

        # other optional features

        # Make copy of flag processors so each revlog instance can support
        # custom flags.
        self._flagprocessors = dict(flagutil.flagprocessors)
        # prevent nesting of addgroup
        self._adding_group = None

        chunk_cache = self._loadindex()
        self._load_inner(chunk_cache)
        self._concurrencychecker = concurrencychecker

    def _init_opts(self):
        """process options (from above/config) to set up associated default revlog mode

        These values might be affected when actually reading on disk information.

        The relevant values are returned for use in _loadindex().

        * newversionflags:
            version header to use if we need to create a new revlog

        * mmapindexthreshold:
            minimal index size at which to start using mmap

        * force_nodemap:
            force the usage of a "development" version of the nodemap code
        """
        opts = self.opener.options

        if b'changelogv2' in opts and self.revlog_kind == KIND_CHANGELOG:
            new_header = CHANGELOGV2
            compute_rank = opts.get(b'changelogv2.compute-rank', True)
            self.feature_config.compute_rank = compute_rank
        elif b'revlogv2' in opts:
            new_header = REVLOGV2
        elif b'revlogv1' in opts:
            new_header = REVLOGV1
            if self._may_inline:
                new_header |= FLAG_INLINE_DATA
            if b'generaldelta' in opts:
                new_header |= FLAG_GENERALDELTA
        elif b'revlogv0' in self.opener.options:
            new_header = REVLOGV0
        else:
            new_header = REVLOG_DEFAULT_VERSION

        mmapindexthreshold = None
        if self.data_config.mmap_large_index:
            mmapindexthreshold = self.data_config.mmap_index_threshold
        if self.feature_config.enable_ellipsis:
            self._flagprocessors[REVIDX_ELLIPSIS] = ellipsisprocessor

        # revlog v0 doesn't have flag processors
        for flag, processor in opts.get(b'flagprocessors', {}).items():
            flagutil.insertflagprocessor(flag, processor, self._flagprocessors)

        chunk_cache_size = self.data_config.chunk_cache_size
        if chunk_cache_size <= 0:
            raise error.RevlogError(
                _(b'revlog chunk cache size %r is not greater than 0')
                % chunk_cache_size
            )
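        # (comment added for clarity) `n & (n - 1)` clears the lowest set bit
        # of n, so the check below is non-zero exactly when chunk_cache_size
        # is not a power of two.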
        elif chunk_cache_size & (chunk_cache_size - 1):
            raise error.RevlogError(
                _(b'revlog chunk cache size %r is not a power of 2')
                % chunk_cache_size
            )
        force_nodemap = opts.get(b'devel-force-nodemap', False)
        return new_header, mmapindexthreshold, force_nodemap

    def _get_data(self, filepath, mmap_threshold, size=None):
        """return a file content with or without mmap

        If the file is missing, return the empty string"""
        try:
            with self.opener(filepath) as fp:
                if mmap_threshold is not None:
                    file_size = self.opener.fstat(fp).st_size
                    if file_size >= mmap_threshold:
                        if size is not None:
                            # avoid potential mmap crash
                            size = min(file_size, size)
                        # TODO: should .close() to release resources without
                        # relying on Python GC
                        if size is None:
                            return util.buffer(util.mmapread(fp))
                        else:
                            return util.buffer(util.mmapread(fp, size))
                if size is None:
                    return fp.read()
                else:
                    return fp.read(size)
        except FileNotFoundError:
            return b''

    def get_streams(self, max_linkrev, force_inline=False):
        """return a list of streams that represent this revlog

        This is used by stream-clone to do byte-to-byte copies of a repository.

        This streams data for all revisions that refer to a changelog revision
        up to `max_linkrev`.

        If `force_inline` is set, it enforces that the stream will represent
        an inline revlog.

        It returns a list of three-tuples:

            [
                (filename, bytes_stream, stream_size),
                …
            ]
        """
        n = len(self)
        index = self.index
        while n > 0:
            linkrev = index[n - 1][4]
            if linkrev < max_linkrev:
                break
            # note: this loop will rarely go through multiple iterations, since
            # it only traverses commits created during the current streaming
            # pull operation.
            #
            # If this becomes a problem, using a binary search should cap the
            # runtime of this.
            n = n - 1
        if n == 0:
            # no data to send
            return []
        index_size = n * index.entry_size
        data_size = self.end(n - 1)

        # XXX we might have been split (or stripped) since the object was
        # initialized. We need to close this race too, probably by having a
        # way to pre-open the files we feed to the revlog and never closing
        # them before we are done streaming.

        if self._inline:

            def get_stream():
                with self.opener(self._indexfile, mode=b"r") as fp:
                    yield None
                    size = index_size + data_size
                    if size <= 65536:
                        yield fp.read(size)
                    else:
                        yield from util.filechunkiter(fp, limit=size)

            inline_stream = get_stream()
            next(inline_stream)
            return [
                (self._indexfile, inline_stream, index_size + data_size),
            ]
        elif force_inline:

            def get_stream():
                with self.reading():
                    yield None

                    for rev in range(n):
                        idx = self.index.entry_binary(rev)
                        if rev == 0 and self._docket is None:
                            # re-inject the inline flag
                            header = self._format_flags
                            header |= self._format_version
                            header |= FLAG_INLINE_DATA
                            header = self.index.pack_header(header)
                            idx = header + idx
                        yield idx
                        yield self._inner.get_segment_for_revs(rev, rev)[1]

            inline_stream = get_stream()
            next(inline_stream)
            return [
                (self._indexfile, inline_stream, index_size + data_size),
            ]
        else:

            def get_index_stream():
                with self.opener(self._indexfile, mode=b"r") as fp:
                    yield None
                    if index_size <= 65536:
                        yield fp.read(index_size)
                    else:
                        yield from util.filechunkiter(fp, limit=index_size)

            def get_data_stream():
                with self._datafp() as fp:
                    yield None
                    if data_size <= 65536:
                        yield fp.read(data_size)
                    else:
                        yield from util.filechunkiter(fp, limit=data_size)

            index_stream = get_index_stream()
            next(index_stream)
            data_stream = get_data_stream()
            next(data_stream)
            return [
                (self._datafile, data_stream, data_size),
                (self._indexfile, index_stream, index_size),
            ]

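    # Hypothetical consumer sketch (editorial; the names `rl` and `vfs` are
    # assumptions, not part of this module).  Stream-clone style callers
    # would iterate the returned triples and copy each stream out:
    #
    #   for name, stream, size in rl.get_streams(max_linkrev):
    #       with vfs(name, b'w') as fp:
    #           for chunk in stream:
    #               fp.write(chunk)
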
    def _loadindex(self, docket=None):
        new_header, mmapindexthreshold, force_nodemap = self._init_opts()

        if self.postfix is not None:
            entry_point = b'%s.i.%s' % (self.radix, self.postfix)
        elif self._trypending and self.opener.exists(b'%s.i.a' % self.radix):
            entry_point = b'%s.i.a' % self.radix
        elif self._try_split and self.opener.exists(self._split_index_file):
            entry_point = self._split_index_file
        else:
            entry_point = b'%s.i' % self.radix

        if docket is not None:
            self._docket = docket
            self._docket_file = entry_point
        else:
            self._initempty = True
            entry_data = self._get_data(entry_point, mmapindexthreshold)
            if len(entry_data) > 0:
                header = INDEX_HEADER.unpack(entry_data[:4])[0]
                self._initempty = False
            else:
                header = new_header

            self._format_flags = header & ~0xFFFF
            self._format_version = header & 0xFFFF

            supported_flags = SUPPORTED_FLAGS.get(self._format_version)
            if supported_flags is None:
                msg = _(b'unknown version (%d) in revlog %s')
                msg %= (self._format_version, self.display_id)
                raise error.RevlogError(msg)
            elif self._format_flags & ~supported_flags:
                msg = _(b'unknown flags (%#04x) in version %d revlog %s')
                display_flag = self._format_flags >> 16
                msg %= (display_flag, self._format_version, self.display_id)
                raise error.RevlogError(msg)

            features = FEATURES_BY_VERSION[self._format_version]
            self._inline = features[b'inline'](self._format_flags)
            self.delta_config.general_delta = features[b'generaldelta'](
                self._format_flags
            )
            self.feature_config.has_side_data = features[b'sidedata']

            if not features[b'docket']:
                self._indexfile = entry_point
                index_data = entry_data
            else:
                self._docket_file = entry_point
                if self._initempty:
                    self._docket = docketutil.default_docket(self, header)
                else:
                    self._docket = docketutil.parse_docket(
                        self, entry_data, use_pending=self._trypending
                    )

        if self._docket is not None:
            self._indexfile = self._docket.index_filepath()
            index_data = b''
            index_size = self._docket.index_end
            if index_size > 0:
                index_data = self._get_data(
                    self._indexfile, mmapindexthreshold, size=index_size
                )
                if len(index_data) < index_size:
                    msg = _(b'too few index data for %s: got %d, expected %d')
                    msg %= (self.display_id, len(index_data), index_size)
                    raise error.RevlogError(msg)

            self._inline = False
            # generaldelta implied by version 2 revlogs.
            self.delta_config.general_delta = True
            # the logic for persistent nodemap will be dealt with within the
            # main docket, so disable it for now.
            self._nodemap_file = None

        if self._docket is not None:
            self._datafile = self._docket.data_filepath()
            self._sidedatafile = self._docket.sidedata_filepath()
        elif self.postfix is None:
            self._datafile = b'%s.d' % self.radix
        else:
            self._datafile = b'%s.d.%s' % (self.radix, self.postfix)

        self.nodeconstants = sha1nodeconstants
        self.nullid = self.nodeconstants.nullid

        # sparse-revlog can't be on without general-delta (issue6056)
        if not self.delta_config.general_delta:
            self.delta_config.sparse_revlog = False

        self._storedeltachains = True

        devel_nodemap = (
            self._nodemap_file
            and force_nodemap
            and parse_index_v1_nodemap is not None
        )

        use_rust_index = False
        if rustrevlog is not None and self._nodemap_file is not None:
            # we would like to use the rust_index in all cases, especially
            # because it is necessary for AncestorsIterator and LazyAncestors
            # since the 6.7 cycle.
            #
            # However, the performance impact of unconditionally building the
            # nodemap is currently a problem for repositories without a
            # persistent nodemap.
            use_rust_index = True

        self._parse_index = parse_index_v1
        if self._format_version == REVLOGV0:
            self._parse_index = revlogv0.parse_index_v0
        elif self._format_version == REVLOGV2:
            self._parse_index = parse_index_v2
        elif self._format_version == CHANGELOGV2:
            self._parse_index = parse_index_cl_v2
        elif devel_nodemap:
            self._parse_index = parse_index_v1_nodemap
        elif use_rust_index:
            self._parse_index = functools.partial(
                parse_index_v1_rust, default_header=new_header
            )
        try:
            d = self._parse_index(index_data, self._inline)
            index, chunkcache = d
            use_nodemap = (
                not self._inline
                and self._nodemap_file is not None
                and hasattr(index, 'update_nodemap_data')
            )
            if use_nodemap:
                nodemap_data = nodemaputil.persisted_data(self)
                if nodemap_data is not None:
                    docket = nodemap_data[0]
                    if (
                        len(d[0]) > docket.tip_rev
                        and d[0][docket.tip_rev][7] == docket.tip_node
                    ):
                        # no changelog tampering
                        self._nodemap_docket = docket
                        index.update_nodemap_data(*nodemap_data)
        except (ValueError, IndexError):
            raise error.RevlogError(
                _(b"index %s is corrupted") % self.display_id
            )
        self.index = index
        # revnum -> (chain-length, sum-delta-length)
        self._chaininfocache = util.lrucachedict(500)

        return chunkcache

    def _load_inner(self, chunk_cache):
        if self._docket is None:
            default_compression_header = None
        else:
            default_compression_header = self._docket.default_compression_header

        self._inner = _InnerRevlog(
            opener=self.opener,
            index=self.index,
            index_file=self._indexfile,
            data_file=self._datafile,
            sidedata_file=self._sidedatafile,
            inline=self._inline,
            data_config=self.data_config,
            delta_config=self.delta_config,
            feature_config=self.feature_config,
            chunk_cache=chunk_cache,
            default_compression_header=default_compression_header,
        )

    def get_revlog(self):
        """simple function to mirror the API of other not-really-revlog objects"""
        return self

    @util.propertycache
    def revlog_kind(self):
        return self.target[0]

    @util.propertycache
    def display_id(self):
        """The public facing "ID" of the revlog that we use in messages"""
        if self.revlog_kind == KIND_FILELOG:
            # Reference the file without the "data/" prefix, so it is familiar
            # to the user.
            return self.target[1]
        else:
            return self.radix

    def _datafp(self, mode=b'r'):
        """file object for the revlog's data file"""
        return self.opener(self._datafile, mode=mode)

    def tiprev(self):
        return len(self.index) - 1

    def tip(self):
        return self.node(self.tiprev())

    def __contains__(self, rev):
        return 0 <= rev < len(self)

    def __len__(self):
        return len(self.index)

    def __iter__(self):
        return iter(range(len(self)))

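    # Illustration (editorial): the dunder methods above make a revlog behave
    # like a sequence of revision numbers, e.g. for a revlog `rl`:
    #
    #   len(rl)         # number of revisions
    #   3 in rl         # True when revision 3 exists
    #   list(iter(rl))  # [0, 1, 2, ..., len(rl) - 1]
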
    def revs(self, start=0, stop=None):
        """iterate over all revs in this revlog (from start to stop)"""
        return storageutil.iterrevs(len(self), start=start, stop=stop)

    def hasnode(self, node):
        try:
            self.rev(node)
            return True
        except KeyError:
            return False

    def _candelta(self, baserev, rev):
        """whether two revisions (baserev, rev) can be delta-ed or not"""
        # Disable delta if either rev requires a content-changing flag
        # processor (ex. LFS). This is because such a flag processor can
        # alter the rawtext content that the delta will be based on, and two
        # clients could have the same revlog node with different flags (i.e.
        # different rawtext contents) and the delta could be incompatible.
        if (self.flags(baserev) & REVIDX_RAWTEXT_CHANGING_FLAGS) or (
            self.flags(rev) & REVIDX_RAWTEXT_CHANGING_FLAGS
        ):
            return False
        return True

    def update_caches(self, transaction):
        """update the on-disk cache

        If a transaction is passed, the update may be delayed to transaction
        commit."""
        if self._nodemap_file is not None:
            if transaction is None:
                nodemaputil.update_persistent_nodemap(self)
            else:
                nodemaputil.setup_persistent_nodemap(transaction, self)

    def clearcaches(self):
        """Clear in-memory caches"""
        self._chainbasecache.clear()
        self._inner.clear_cache()
        self._pcache = {}
        self._nodemap_docket = None
        self.index.clearcaches()
        # The python code is the one responsible for validating the docket, so
        # we end up having to refresh it here.
        use_nodemap = (
            not self._inline
            and self._nodemap_file is not None
            and hasattr(self.index, 'update_nodemap_data')
        )
        if use_nodemap:
            nodemap_data = nodemaputil.persisted_data(self)
            if nodemap_data is not None:
                self._nodemap_docket = nodemap_data[0]
                self.index.update_nodemap_data(*nodemap_data)

    def rev(self, node):
        """return the revision number associated with a <nodeid>"""
        try:
            return self.index.rev(node)
        except TypeError:
            raise
        except error.RevlogError:
            # parsers.c radix tree lookup failed
            if (
                node == self.nodeconstants.wdirid
                or node in self.nodeconstants.wdirfilenodeids
            ):
                raise error.WdirUnsupported
            raise error.LookupError(node, self.display_id, _(b'no node'))

    # Accessors for index entries.

    # First tuple entry is 8 bytes. First 6 bytes are offset. Last 2 bytes
    # are flags.
    def start(self, rev):
        return int(self.index[rev][0] >> 16)

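    # Worked example (editorial): the first index field packs a 48-bit byte
    # offset and 16 bits of flags, so an entry whose data starts at byte 1234
    # with flags 0x0001 stores (1234 << 16) | 0x0001; start() recovers 1234
    # with `>> 16` and flags() recovers 0x0001 with `& 0xFFFF`.
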
    def sidedata_cut_off(self, rev):
        sd_cut_off = self.index[rev][8]
        if sd_cut_off != 0:
            return sd_cut_off
        # This is some annoying dance, because entries without sidedata
        # currently use 0 as their offset (instead of previous-offset +
        # previous-size).
        #
        # We should reconsider this "sidedata → 0 sidedata-offset" policy.
        # In the meantime, we need this.
        while 0 <= rev:
            e = self.index[rev]
            if e[9] != 0:
                return e[8] + e[9]
            rev -= 1
        return 0

    def flags(self, rev):
        return self.index[rev][0] & 0xFFFF

    def length(self, rev):
        return self.index[rev][1]

    def sidedata_length(self, rev):
        if not self.feature_config.has_side_data:
            return 0
        return self.index[rev][9]

    def rawsize(self, rev):
        """return the length of the uncompressed text for a given revision"""
        l = self.index[rev][2]
        if l >= 0:
            return l

        t = self.rawdata(rev)
        return len(t)

    def size(self, rev):
        """length of non-raw text (processed by a "read" flag processor)"""
        # fast path: if no "read" flag processor could change the content,
        # size is rawsize. note: ELLIPSIS is known to not change the content.
        flags = self.flags(rev)
        if flags & (flagutil.REVIDX_KNOWN_FLAGS ^ REVIDX_ELLIPSIS) == 0:
            return self.rawsize(rev)

        return len(self.revision(rev))

    def fast_rank(self, rev):
        """Return the rank of a revision if already known, or None otherwise.

        The rank of a revision is the size of the sub-graph it defines as a
        head. Equivalently, the rank of a revision `r` is the size of the set
        `ancestors(r)`, `r` included.

        This method returns the rank retrieved from the revlog in constant
        time. It makes no attempt at computing unknown values for versions of
        the revlog which do not persist the rank.
        """
        rank = self.index[rev][ENTRY_RANK]
        if self._format_version != CHANGELOGV2 or rank == RANK_UNKNOWN:
            return None
        if rev == nullrev:
            return 0  # convention
        return rank

    def chainbase(self, rev):
        base = self._chainbasecache.get(rev)
        if base is not None:
            return base

        index = self.index
        iterrev = rev
        base = index[iterrev][3]
        while base != iterrev:
            iterrev = base
            base = index[iterrev][3]

        self._chainbasecache[rev] = base
        return base

    def linkrev(self, rev):
        return self.index[rev][4]

    def parentrevs(self, rev):
        try:
            entry = self.index[rev]
        except IndexError:
            if rev == wdirrev:
                raise error.WdirUnsupported
            raise

        if self.feature_config.canonical_parent_order and entry[5] == nullrev:
            return entry[6], entry[5]
        else:
            return entry[5], entry[6]

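    # Note (editorial): with canonical_parent_order, a revision whose first
    # stored parent is null has its parents swapped on read, so the null
    # parent is always reported second.
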
    # fast parentrevs(rev) where rev isn't filtered
    _uncheckedparentrevs = parentrevs

    def node(self, rev):
        try:
            return self.index[rev][7]
        except IndexError:
            if rev == wdirrev:
                raise error.WdirUnsupported
            raise

    # Derived from index values.

    def end(self, rev):
        return self.start(rev) + self.length(rev)

    def parents(self, node):
        i = self.index
        d = i[self.rev(node)]
        # inline node() to avoid function call overhead
        if self.feature_config.canonical_parent_order and d[5] == nullrev:
            return i[d[6]][7], i[d[5]][7]
        else:
            return i[d[5]][7], i[d[6]][7]

    def chainlen(self, rev):
        return self._chaininfo(rev)[0]

    def _chaininfo(self, rev):
        chaininfocache = self._chaininfocache
        if rev in chaininfocache:
            return chaininfocache[rev]
        index = self.index
        generaldelta = self.delta_config.general_delta
        iterrev = rev
        e = index[iterrev]
        clen = 0
        compresseddeltalen = 0
        while iterrev != e[3]:
            clen += 1
            compresseddeltalen += e[1]
            if generaldelta:
                iterrev = e[3]
            else:
                iterrev -= 1
            if iterrev in chaininfocache:
                t = chaininfocache[iterrev]
                clen += t[0]
                compresseddeltalen += t[1]
                break
            e = index[iterrev]
        else:
            # Add text length of base since decompressing that also takes
            # work. For cache hits the length is already included.
            compresseddeltalen += e[1]
        r = (clen, compresseddeltalen)
        chaininfocache[rev] = r
        return r

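    # Worked illustration (editorial): with general delta, _chaininfo walks
    # rev -> deltabase(rev) -> ... until it reaches a full snapshot (whose
    # base is itself).  For a chain 5 -> 3 -> 0 where 0 is the snapshot, it
    # returns (2, length(5) + length(3) + length(0)): two deltas, plus the
    # stored length of every link including the base.
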
    def _deltachain(self, rev, stoprev=None):
        return self._inner._deltachain(rev, stoprev=stoprev)

    def ancestors(self, revs, stoprev=0, inclusive=False):
        """Generate the ancestors of 'revs' in reverse revision order.
        Does not generate revs lower than stoprev.

        See the documentation for ancestor.lazyancestors for more details."""

        # first, make sure start revisions aren't filtered
        revs = list(revs)
        checkrev = self.node
        for r in revs:
            checkrev(r)
        # and we're sure ancestors aren't filtered as well

        if rustancestor is not None and self.index.rust_ext_compat:
            lazyancestors = rustancestor.LazyAncestors
            arg = self.index
        else:
            lazyancestors = ancestor.lazyancestors
            arg = self._uncheckedparentrevs
        return lazyancestors(arg, revs, stoprev=stoprev, inclusive=inclusive)

    def descendants(self, revs):
        return dagop.descendantrevs(revs, self.revs, self.parentrevs)

    def findcommonmissing(self, common=None, heads=None):
        """Return a tuple of the ancestors of common and the ancestors of heads
        that are not ancestors of common. In revset terminology, we return the
        tuple:

        ::common, (::heads) - (::common)

        The list is sorted by revision number, meaning it is
        topologically sorted.

        'heads' and 'common' are both lists of node IDs. If heads is
        not supplied, uses all of the revlog's heads. If common is not
        supplied, uses nullid."""
        if common is None:
            common = [self.nullid]
        if heads is None:
            heads = self.heads()

        common = [self.rev(n) for n in common]
        heads = [self.rev(n) for n in heads]

        # we want the ancestors, but inclusive
        class lazyset:
            def __init__(self, lazyvalues):
                self.addedvalues = set()
                self.lazyvalues = lazyvalues

            def __contains__(self, value):
                return value in self.addedvalues or value in self.lazyvalues

            def __iter__(self):
                added = self.addedvalues
                for r in added:
                    yield r
                for r in self.lazyvalues:
                    if r not in added:
                        yield r

            def add(self, value):
                self.addedvalues.add(value)

            def update(self, values):
                self.addedvalues.update(values)

        has = lazyset(self.ancestors(common))
        has.add(nullrev)
        has.update(common)

        # take all ancestors from heads that aren't in has
        missing = set()
        visit = collections.deque(r for r in heads if r not in has)
        while visit:
            r = visit.popleft()
            if r in missing:
                continue
            else:
                missing.add(r)
                for p in self.parentrevs(r):
                    if p not in has:
                        visit.append(p)
        missing = list(missing)
        missing.sort()
        return has, [self.node(miss) for miss in missing]

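    # Worked example (editorial): in a linear history 0-1-2-3-4 with
    # common=[node(1)] and heads=[node(4)], `has` lazily covers
    # {nullrev, 0, 1} and the second element of the return value is the
    # nodes of revisions [2, 3, 4], sorted by revision number.
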
    def incrementalmissingrevs(self, common=None):
        """Return an object that can be used to incrementally compute the
        revision numbers of the ancestors of arbitrary sets that are not
        ancestors of common. This is an ancestor.incrementalmissingancestors
        object.

        'common' is a list of revision numbers. If common is not supplied, uses
        nullrev.
        """
        if common is None:
            common = [nullrev]

        if rustancestor is not None and self.index.rust_ext_compat:
            return rustancestor.MissingAncestors(self.index, common)
        return ancestor.incrementalmissingancestors(self.parentrevs, common)

    def findmissingrevs(self, common=None, heads=None):
        """Return the revision numbers of the ancestors of heads that
        are not ancestors of common.

        More specifically, return a list of revision numbers corresponding to
        nodes N such that every N satisfies the following constraints:

          1. N is an ancestor of some node in 'heads'
          2. N is not an ancestor of any node in 'common'

        The list is sorted by revision number, meaning it is
        topologically sorted.

        'heads' and 'common' are both lists of revision numbers. If heads is
        not supplied, uses all of the revlog's heads. If common is not
        supplied, uses nullrev."""
        if common is None:
            common = [nullrev]
        if heads is None:
            heads = self.headrevs()

        inc = self.incrementalmissingrevs(common=common)
        return inc.missingancestors(heads)

    def findmissing(self, common=None, heads=None):
        """Return the ancestors of heads that are not ancestors of common.

        More specifically, return a list of nodes N such that every N
        satisfies the following constraints:

          1. N is an ancestor of some node in 'heads'
          2. N is not an ancestor of any node in 'common'

        The list is sorted by revision number, meaning it is
        topologically sorted.

        'heads' and 'common' are both lists of node IDs. If heads is
        not supplied, uses all of the revlog's heads. If common is not
        supplied, uses nullid."""
        if common is None:
            common = [self.nullid]
        if heads is None:
            heads = self.heads()

        common = [self.rev(n) for n in common]
        heads = [self.rev(n) for n in heads]

        inc = self.incrementalmissingrevs(common=common)
        return [self.node(r) for r in inc.missingancestors(heads)]

    def nodesbetween(self, roots=None, heads=None):
        """Return a topological path from 'roots' to 'heads'.

        Return a tuple (nodes, outroots, outheads) where 'nodes' is a
        topologically sorted list of all nodes N that satisfy both of
        these constraints:

          1. N is a descendant of some node in 'roots'
          2. N is an ancestor of some node in 'heads'

        Every node is considered to be both a descendant and an ancestor
        of itself, so every reachable node in 'roots' and 'heads' will be
        included in 'nodes'.

        'outroots' is the list of reachable nodes in 'roots', i.e., the
        subset of 'roots' that is returned in 'nodes'. Likewise,
        'outheads' is the subset of 'heads' that is also in 'nodes'.

        'roots' and 'heads' are both lists of node IDs. If 'roots' is
        unspecified, uses nullid as the only root. If 'heads' is
        unspecified, uses list of all of the revlog's heads."""
        nonodes = ([], [], [])
        if roots is not None:
            roots = list(roots)
            if not roots:
                return nonodes
            lowestrev = min([self.rev(n) for n in roots])
        else:
            roots = [self.nullid]  # Everybody's a descendant of nullid
            lowestrev = nullrev
        if (lowestrev == nullrev) and (heads is None):
            # We want _all_ the nodes!
            return (
                [self.node(r) for r in self],
                [self.nullid],
                list(self.heads()),
            )
        if heads is None:
            # All nodes are ancestors, so the latest ancestor is the last
            # node.
            highestrev = len(self) - 1
            # Set ancestors to None to signal that every node is an ancestor.
            ancestors = None
            # Set heads to an empty dictionary for later discovery of heads
            heads = {}
        else:
            heads = list(heads)
            if not heads:
                return nonodes
            ancestors = set()
            # Turn heads into a dictionary so we can remove 'fake' heads.
            # Also, later we will be using it to filter out the heads we can't
            # find from roots.
            heads = dict.fromkeys(heads, False)
            # Start at the top and keep marking parents until we're done.
            nodestotag = set(heads)
            # Remember where the top was so we can use it as a limit later.
            highestrev = max([self.rev(n) for n in nodestotag])
            while nodestotag:
                # grab a node to tag
                n = nodestotag.pop()
                # Never tag nullid
                if n == self.nullid:
                    continue
                # A node's revision number represents its place in a
                # topologically sorted list of nodes.
                r = self.rev(n)
                if r >= lowestrev:
                    if n not in ancestors:
                        # If we are possibly a descendant of one of the roots
                        # and we haven't already been marked as an ancestor
                        ancestors.add(n)  # Mark as ancestor
                        # Add non-nullid parents to list of nodes to tag.
                        nodestotag.update(
                            [p for p in self.parents(n) if p != self.nullid]
                        )
                    elif n in heads:  # We've seen it before, is it a fake head?
                        # So it is, real heads should not be the ancestors of
                        # any other heads.
                        heads.pop(n)
            if not ancestors:
                return nonodes
            # Now that we have our set of ancestors, we want to remove any
            # roots that are not ancestors.

            # If one of the roots was nullid, everything is included anyway.
            if lowestrev > nullrev:
                # But, since we weren't, let's recompute the lowest rev to not
                # include roots that aren't ancestors.

                # Filter out roots that aren't ancestors of heads
                roots = [root for root in roots if root in ancestors]
                # Recompute the lowest revision
                if roots:
                    lowestrev = min([self.rev(root) for root in roots])
                else:
                    # No more roots? Return empty list
                    return nonodes
            else:
                # We are descending from nullid, and don't need to care about
                # any other roots.
                lowestrev = nullrev
                roots = [self.nullid]
        # Transform our roots list into a set.
        descendants = set(roots)
        # Also, keep the original roots so we can filter out roots that aren't
        # 'real' roots (i.e. are descended from other roots).
        roots = descendants.copy()
        # Our topologically sorted list of output nodes.
        orderedout = []
        # Don't start at nullid since we don't want nullid in our output list,
        # and if nullid shows up in descendants, empty parents will look like
        # they're descendants.
        for r in self.revs(start=max(lowestrev, 0), stop=highestrev + 1):
            n = self.node(r)
            isdescendant = False
            if lowestrev == nullrev:  # Everybody is a descendant of nullid
                isdescendant = True
            elif n in descendants:
                # n is already a descendant
                isdescendant = True
                # This check only needs to be done here because all the roots
                # will start being marked as descendants before the loop.
                if n in roots:
                    # If n was a root, check if it's a 'real' root.
                    p = tuple(self.parents(n))
                    # If any of its parents are descendants, it's not a root.
                    if (p[0] in descendants) or (p[1] in descendants):
                        roots.remove(n)
            else:
                p = tuple(self.parents(n))
                # A node is a descendant if either of its parents are
                # descendants. (We seeded the descendants set with the roots
                # up there, remember?)
                if (p[0] in descendants) or (p[1] in descendants):
                    descendants.add(n)
                    isdescendant = True
            if isdescendant and ((ancestors is None) or (n in ancestors)):
                # Only include nodes that are both descendants and ancestors.
                orderedout.append(n)
                if (ancestors is not None) and (n in heads):
                    # We're trying to figure out which heads are reachable
                    # from roots.
                    # Mark this head as having been reached
                    heads[n] = True
                elif ancestors is None:
                    # Otherwise, we're trying to discover the heads.
                    # Assume this is a head because if it isn't, the next step
                    # will eventually remove it.
                    heads[n] = True
                    # But, obviously its parents aren't.
                    for p in self.parents(n):
                        heads.pop(p, None)
        heads = [head for head, flag in heads.items() if flag]
        roots = list(roots)
        assert orderedout
        assert roots
        assert heads
        return (orderedout, roots, heads)

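# ---- editor's sketch (illustration only, not part of revlog.py) -----------
# The roots::heads semantics documented above, replayed on a hypothetical
# four-revision DAG: a node is kept when it descends from a root *and* is an
# ancestor of a head (both relations being reflexive).
_parents = {0: [], 1: [0], 2: [1], 3: [1]}  # 2 and 3 are children of 1
_roots, _heads = {1}, {2}
_desc = set(_roots)
for _r in sorted(_parents):  # revision order is topological
    if any(_p in _desc for _p in _parents[_r]):
        _desc.add(_r)
_anc, _stack = set(), list(_heads)
while _stack:
    _n = _stack.pop()
    if _n not in _anc:
        _anc.add(_n)
        _stack.extend(_parents[_n])
# rev 3 descends from the root but is not an ancestor of any head: excluded
assert sorted(_desc & _anc) == [1, 2]
# ----------------------------------------------------------------------------
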
    def headrevs(self, revs=None):
        if revs is None:
            try:
                return self.index.headrevs()
            except AttributeError:
                return self._headrevs()
        if rustdagop is not None and self.index.rust_ext_compat:
            return rustdagop.headrevs(self.index, revs)
        return dagop.headrevs(revs, self._uncheckedparentrevs)

    def headrevsdiff(self, start, stop):
        try:
            return self.index.headrevsdiff(start, stop)
        except AttributeError:
            return dagop.headrevsdiff(self._uncheckedparentrevs, start, stop)

    def computephases(self, roots):
        return self.index.computephasesmapsets(roots)

    def _headrevs(self):
        count = len(self)
        if not count:
            return [nullrev]
        # we won't iterate over filtered revs, so nobody is a head at start
        ishead = [0] * (count + 1)
        index = self.index
        for r in self:
            ishead[r] = 1  # I may be a head
            e = index[r]
            ishead[e[5]] = ishead[e[6]] = 0  # my parents are not
        return [r for r, val in enumerate(ishead) if val]

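# ---- editor's sketch (illustration only, not part of revlog.py) -----------
# The pure-python head computation above in miniature: every rev starts as a
# candidate head and is struck out as soon as it appears as a parent. The
# extra slot at the end absorbs nullrev (-1) parents. Demo names are
# hypothetical.
def _demoheadrevs(parentrevs):
    count = len(parentrevs)
    ishead = [0] * (count + 1)
    for r in range(count):
        ishead[r] = 1  # r may be a head
        p1, p2 = parentrevs[r]
        ishead[p1] = ishead[p2] = 0  # its parents are not
    return [r for r in range(count) if ishead[r]]


# rev 0 has two children, so only revs 1 and 2 remain heads
assert _demoheadrevs([(-1, -1), (0, -1), (0, -1)]) == [1, 2]
# ----------------------------------------------------------------------------
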
    def _head_node_ids(self):
        try:
            return self.index.head_node_ids()
        except AttributeError:
            return [self.node(r) for r in self.headrevs()]

    def heads(self, start=None, stop=None):
        """return the list of all nodes that have no children

        if start is specified, only heads that are descendants of
        start will be returned
        if stop is specified, it will consider all the revs from stop
        as if they had no children
        """
        if start is None and stop is None:
            if not len(self):
                return [self.nullid]
            return self._head_node_ids()
        if start is None:
            start = nullrev
        else:
            start = self.rev(start)

        stoprevs = {self.rev(n) for n in stop or []}

        revs = dagop.headrevssubset(
            self.revs, self.parentrevs, startrev=start, stoprevs=stoprevs
        )

        return [self.node(rev) for rev in revs]

    def diffheads(self, start, stop):
        """return the nodes that make up the difference between
        heads of revs before `start` and heads of revs before `stop`"""
        removed, added = self.headrevsdiff(start, stop)
        return [self.node(r) for r in removed], [self.node(r) for r in added]

    def children(self, node):
        """find the children of a given node"""
        c = []
        p = self.rev(node)
        for r in self.revs(start=p + 1):
            prevs = [pr for pr in self.parentrevs(r) if pr != nullrev]
            if prevs:
                for pr in prevs:
                    if pr == p:
                        c.append(self.node(r))
            elif p == nullrev:
                c.append(self.node(r))
        return c

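# ---- editor's sketch (illustration only, not part of revlog.py) -----------
# ``children`` above only has to scan revisions *after* the parent, because a
# child always has a higher revision number than its parents. A toy version
# over (p1, p2) pairs, with hypothetical names:
def _demochildren(parentrevs, p):
    return [r for r in range(p + 1, len(parentrevs)) if p in parentrevs[r]]


assert _demochildren([(-1, -1), (0, -1), (0, 1)], 0) == [1, 2]
assert _demochildren([(-1, -1), (0, -1), (0, 1)], 2) == []
# ----------------------------------------------------------------------------
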
    def commonancestorsheads(self, a, b):
        """calculate all the heads of the common ancestors of nodes a and b"""
        a, b = self.rev(a), self.rev(b)
        ancs = self._commonancestorsheads(a, b)
        return pycompat.maplist(self.node, ancs)

    def _commonancestorsheads(self, *revs):
        """calculate all the heads of the common ancestors of revs"""
        try:
            ancs = self.index.commonancestorsheads(*revs)
        except (AttributeError, OverflowError):  # C implementation failed
            ancs = ancestor.commonancestorsheads(self.parentrevs, *revs)
        return ancs

    def isancestor(self, a, b):
        """return True if node a is an ancestor of node b

        A revision is considered an ancestor of itself."""
        a, b = self.rev(a), self.rev(b)
        return self.isancestorrev(a, b)

    def isancestorrev(self, a, b):
        """return True if revision a is an ancestor of revision b

        A revision is considered an ancestor of itself.

        The implementation of this is trivial but the use of
        reachableroots is not."""
        if a == nullrev:
            return True
        elif a == b:
            return True
        elif a > b:
            return False
        return bool(self.reachableroots(a, [b], [a], includepath=False))

    def reachableroots(self, minroot, heads, roots, includepath=False):
        """return (heads(::(<roots> and <roots>::<heads>)))

        If includepath is True, return (<roots>::<heads>)."""
        try:
            return self.index.reachableroots2(
                minroot, heads, roots, includepath
            )
        except AttributeError:
            return dagop._reachablerootspure(
                self.parentrevs, minroot, roots, heads, includepath
            )

    def ancestor(self, a, b):
        """calculate the "best" common ancestor of nodes a and b"""

        a, b = self.rev(a), self.rev(b)
        try:
            ancs = self.index.ancestors(a, b)
        except (AttributeError, OverflowError):
            ancs = ancestor.ancestors(self.parentrevs, a, b)
        if ancs:
            # choose a consistent winner when there's a tie
            return min(map(self.node, ancs))
        return self.nullid

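# ---- editor's sketch (illustration only, not part of revlog.py) -----------
# Why ``min(map(self.node, ancs))`` above: when two ancestors are equally
# good, comparing their 20-byte node ids gives a deterministic winner that
# does not depend on the order in which the candidates were produced.
_cands = [b'\xaa' * 20, b'\x11' * 20]
assert min(_cands) == min(reversed(_cands)) == b'\x11' * 20
# ----------------------------------------------------------------------------
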
    def _match(self, id):
        if isinstance(id, int):
            # rev
            return self.node(id)
        if len(id) == self.nodeconstants.nodelen:
            # possibly a binary node
            # odds of a binary node being all hex in ASCII are 1 in 10**25
            try:
                node = id
                self.rev(node)  # quick search the index
                return node
            except error.LookupError:
                pass  # may be partial hex id
        try:
            # str(rev)
            rev = int(id)
            if b"%d" % rev != id:
                raise ValueError
            if rev < 0:
                rev = len(self) + rev
            if rev < 0 or rev >= len(self):
                raise ValueError
            return self.node(rev)
        except (ValueError, OverflowError):
            pass
        if len(id) == 2 * self.nodeconstants.nodelen:
            try:
                # a full hex nodeid?
                node = bin(id)
                self.rev(node)
                return node
            except (binascii.Error, error.LookupError):
                pass

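# ---- editor's sketch (illustration only, not part of revlog.py) -----------
# The ``b"%d" % rev != id`` guard in ``_match`` above only accepts the
# canonical decimal spelling of a revision, so inputs such as b"+10" (which
# int() happily parses) are rejected rather than silently resolved.
def _demoisrevspelling(id):
    try:
        return (b"%d" % int(id)) == id
    except ValueError:
        return False


assert _demoisrevspelling(b"10")
assert not _demoisrevspelling(b"+10")
assert not _demoisrevspelling(b"0x10")
# ----------------------------------------------------------------------------
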
    def _partialmatch(self, id):
        # we don't care about wdirfilenodeids as they should always be full
        # hashes
        maybewdir = self.nodeconstants.wdirhex.startswith(id)
        ambiguous = False
        try:
            partial = self.index.partialmatch(id)
            if partial and self.hasnode(partial):
                if maybewdir:
                    # single 'ff...' match in radix tree, ambiguous with wdir
                    ambiguous = True
                else:
                    return partial
            elif maybewdir:
                # no 'ff...' match in radix tree, wdir identified
                raise error.WdirUnsupported
            else:
                return None
        except error.RevlogError:
            # parsers.c radix tree lookup gave multiple matches
            # fast path: for unfiltered changelog, radix tree is accurate
            if not getattr(self, 'filteredrevs', None):
                ambiguous = True
            # fall through to slow path that filters hidden revisions
        except (AttributeError, ValueError):
            # we are pure python, or key is not hex
            pass
        if ambiguous:
            raise error.AmbiguousPrefixLookupError(
                id, self.display_id, _(b'ambiguous identifier')
            )

        if id in self._pcache:
            return self._pcache[id]

        if len(id) <= 40:
            # hex(node)[:...]
            l = len(id) // 2 * 2  # grab an even number of digits
            try:
                # we're dropping the last digit, so let's check that it's hex,
                # to avoid the expensive computation below if it's not
                if len(id) % 2 > 0:
                    if id[-1] not in hexdigits:
                        return None
                prefix = bin(id[:l])
            except binascii.Error:
                pass
            else:
                nl = [e[7] for e in self.index if e[7].startswith(prefix)]
                nl = [
                    n for n in nl if hex(n).startswith(id) and self.hasnode(n)
                ]
                if self.nodeconstants.nullhex.startswith(id):
                    nl.append(self.nullid)
                if len(nl) > 0:
                    if len(nl) == 1 and not maybewdir:
                        self._pcache[id] = nl[0]
                        return nl[0]
                    raise error.AmbiguousPrefixLookupError(
                        id, self.display_id, _(b'ambiguous identifier')
                    )
                if maybewdir:
                    raise error.WdirUnsupported
                return None

    def lookup(self, id):
        """locate a node based on:
        - revision number or str(revision number)
        - nodeid or subset of hex nodeid
        """
        n = self._match(id)
        if n is not None:
            return n
        n = self._partialmatch(id)
        if n:
            return n

        raise error.LookupError(id, self.display_id, _(b'no match found'))

    def shortest(self, node, minlength=1):
        """Find the shortest unambiguous prefix that matches node."""

        def isvalid(prefix):
            try:
                matchednode = self._partialmatch(prefix)
            except error.AmbiguousPrefixLookupError:
                return False
            except error.WdirUnsupported:
                # single 'ff...' match
                return True
            if matchednode is None:
                raise error.LookupError(node, self.display_id, _(b'no node'))
            return True

        def maybewdir(prefix):
            return all(c == b'f' for c in pycompat.iterbytestr(prefix))

        hexnode = hex(node)

        def disambiguate(hexnode, minlength):
            """Disambiguate against wdirid."""
            for length in range(minlength, len(hexnode) + 1):
                prefix = hexnode[:length]
                if not maybewdir(prefix):
                    return prefix

        if not getattr(self, 'filteredrevs', None):
            try:
                length = max(self.index.shortest(node), minlength)
                return disambiguate(hexnode, length)
            except error.RevlogError:
                if node != self.nodeconstants.wdirid:
                    raise error.LookupError(
                        node, self.display_id, _(b'no node')
                    )
            except AttributeError:
                # Fall through to pure code
                pass

        if node == self.nodeconstants.wdirid:
            for length in range(minlength, len(hexnode) + 1):
                prefix = hexnode[:length]
                if isvalid(prefix):
                    return prefix

        for length in range(minlength, len(hexnode) + 1):
            prefix = hexnode[:length]
            if isvalid(prefix):
                return disambiguate(hexnode, length)

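# ---- editor's sketch (illustration only, not part of revlog.py) -----------
# ``shortest`` must avoid prefixes made only of 'f', because they could also
# abbreviate the virtual working-directory id (ffff...ffff). A byte-level
# check equivalent to ``maybewdir`` above, with hypothetical demo names:
def _demomaybewdir(prefix):
    return all(c == ord('f') for c in prefix)


assert _demomaybewdir(b'ff')
assert not _demomaybewdir(b'fa')
# ----------------------------------------------------------------------------
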
    def cmp(self, node, text):
        """compare text with a given file revision

        returns True if text is different from what is stored.
        """
        p1, p2 = self.parents(node)
        return storageutil.hashrevisionsha1(text, p1, p2) != node

    def deltaparent(self, rev):
        """return deltaparent of the given revision"""
        base = self.index[rev][3]
        if base == rev:
            return nullrev
        elif self.delta_config.general_delta:
            return base
        else:
            return rev - 1

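# ---- editor's sketch (illustration only, not part of revlog.py) -----------
# ``deltaparent`` above in three lines: with general delta the index entry
# records the delta base explicitly; without it, a revision always deltas
# against its immediate predecessor; and a base equal to the revision itself
# marks a full snapshot.
def _demodeltaparent(base, rev, general_delta):
    if base == rev:
        return -1  # nullrev: full text, no delta parent
    return base if general_delta else rev - 1


assert _demodeltaparent(5, 5, True) == -1
assert _demodeltaparent(3, 5, True) == 3
assert _demodeltaparent(3, 5, False) == 4
# ----------------------------------------------------------------------------
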
    def issnapshot(self, rev):
        """tells whether rev is a snapshot"""
        ret = self._inner.issnapshot(rev)
        self.issnapshot = self._inner.issnapshot
        return ret

    def snapshotdepth(self, rev):
        """number of snapshots in the chain before this one"""
        if not self.issnapshot(rev):
            raise error.ProgrammingError(b'revision %d not a snapshot' % rev)
        return len(self._inner._deltachain(rev)[0]) - 1

    def revdiff(self, rev1, rev2):
        """return or calculate a delta between two revisions

        The delta calculated is in binary form and is intended to be written to
        revlog data directly. So this function needs raw revision data.
        """
        if rev1 != nullrev and self.deltaparent(rev2) == rev1:
            return bytes(self._inner._chunk(rev2))

        return mdiff.textdiff(self.rawdata(rev1), self.rawdata(rev2))

    def revision(self, nodeorrev):
        """return an uncompressed revision of a given node or revision
        number.
        """
        return self._revisiondata(nodeorrev)

    def sidedata(self, nodeorrev):
        """a map of extra data related to the changeset but not part of the hash

        This function currently returns a dictionary. However, a more advanced
        mapping object will likely be used in the future for more
        efficient/lazy code.
        """
        # deal with <nodeorrev> argument type
        if isinstance(nodeorrev, int):
            rev = nodeorrev
        else:
            rev = self.rev(nodeorrev)
        return self._sidedata(rev)

    def _rawtext(self, node, rev):
        """return the possibly unvalidated rawtext for a revision

        returns (rev, rawtext, validated)
        """
        # Check if we have the entry in cache
        # The cache entry looks like (node, rev, rawtext)
        if self._inner._revisioncache:
            if self._inner._revisioncache[0] == node:
                return (rev, self._inner._revisioncache[2], True)

        if rev is None:
            rev = self.rev(node)

        return self._inner.raw_text(node, rev)

    def _revisiondata(self, nodeorrev, raw=False):
        # deal with <nodeorrev> argument type
        if isinstance(nodeorrev, int):
            rev = nodeorrev
            node = self.node(rev)
        else:
            node = nodeorrev
            rev = None

        # fast path the special `nullid` rev
        if node == self.nullid:
            return b""

        # ``rawtext`` is the text as stored inside the revlog. Might be the
        # revision or might need to be processed to retrieve the revision.
        rev, rawtext, validated = self._rawtext(node, rev)

        if raw and validated:
            # if we don't want to process the raw text and the raw text is
            # cached, we can exit early.
            return rawtext
        if rev is None:
            rev = self.rev(node)
        # the revlog's flags for this revision
        # (usually alter its state or content)
        flags = self.flags(rev)

        if validated and flags == REVIDX_DEFAULT_FLAGS:
            # no extra flags set, no flag processor runs, text = rawtext
            return rawtext

        if raw:
            validatehash = flagutil.processflagsraw(self, rawtext, flags)
            text = rawtext
        else:
            r = flagutil.processflagsread(self, rawtext, flags)
            text, validatehash = r
        if validatehash:
            self.checkhash(text, node, rev=rev)
        if not validated:
            self._inner._revisioncache = (node, rev, rawtext)

        return text

    def _sidedata(self, rev):
        """Return the sidedata for a given revision number."""
        sidedata_end = None
        if self._docket is not None:
            sidedata_end = self._docket.sidedata_end
        return self._inner.sidedata(rev, sidedata_end)

    def rawdata(self, nodeorrev):
        """return the uncompressed raw data of a given node or revision number."""
        return self._revisiondata(nodeorrev, raw=True)

    def hash(self, text, p1, p2):
        """Compute a node hash.

        Available as a function so that subclasses can replace the hash
        as needed.
        """
        return storageutil.hashrevisionsha1(text, p1, p2)

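# ---- editor's sketch (illustration only, not part of revlog.py) -----------
# A self-contained approximation of storageutil.hashrevisionsha1 as commonly
# described for classic revlogs: sha1 over the two parent nodes in sorted
# order, followed by the text. Treat this as an assumption for illustration;
# storageutil remains the authoritative implementation.
import hashlib


def _demohashrevision(text, p1, p2):
    s = hashlib.sha1(min(p1, p2) + max(p1, p2))
    s.update(text)
    return s.digest()


_null = b'\0' * 20
assert len(_demohashrevision(b'data', _null, _null)) == 20
# ----------------------------------------------------------------------------
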
    def checkhash(self, text, node, p1=None, p2=None, rev=None):
        """Check node hash integrity.

        Available as a function so that subclasses can extend hash mismatch
        behaviors as needed.
        """
        try:
            if p1 is None and p2 is None:
                p1, p2 = self.parents(node)
            if node != self.hash(text, p1, p2):
                # Clear the revision cache on hash failure. The revision cache
                # only stores the raw revision and clearing the cache does have
                # the side-effect that we won't have a cache hit when the raw
                # revision data is accessed. But this case should be rare and
                # it is extra work to teach the cache about the hash
                # verification state.
                if (
                    self._inner._revisioncache
                    and self._inner._revisioncache[0] == node
                ):
                    self._inner._revisioncache = None

                revornode = rev
                if revornode is None:
                    revornode = templatefilters.short(hex(node))
                raise error.RevlogError(
                    _(b"integrity check failed on %s:%s")
                    % (self.display_id, pycompat.bytestr(revornode))
                )
        except error.RevlogError:
            if self.feature_config.censorable and storageutil.iscensoredtext(
                text
            ):
                raise error.CensoredNodeError(self.display_id, node, text)
            raise

    @property
    def _split_index_file(self):
        """the path where to expect the index of an ongoing splitting operation

        The file will only exist if a splitting operation is in progress, but
        it is always expected at the same location."""
        parts = self.radix.split(b'/')
        if len(parts) > 1:
            # add a '-s' suffix to the ``data/`` or ``meta/`` base directory
            head = parts[0] + b'-s'
            mids = parts[1:-1]
            tail = parts[-1] + b'.i'
            pieces = [head] + mids + [tail]
            return b'/'.join(pieces)
        else:
            # the revlog is stored at the root of the store (changelog or
            # manifest), no risk of collision.
            return self.radix + b'.i.s'

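# ---- editor's sketch (illustration only, not part of revlog.py) -----------
# The path rule of ``_split_index_file`` above, replayed standalone: nested
# revlogs move into a sibling ``data-s/``/``meta-s/`` tree, while root-level
# revlogs simply gain an ``.i.s`` extension.
def _demosplitpath(radix):
    parts = radix.split(b'/')
    if len(parts) > 1:
        return b'/'.join(
            [parts[0] + b'-s'] + parts[1:-1] + [parts[-1] + b'.i']
        )
    return radix + b'.i.s'


assert _demosplitpath(b'data/foo') == b'data-s/foo.i'
assert _demosplitpath(b'00changelog') == b'00changelog.i.s'
# ----------------------------------------------------------------------------
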
    def _enforceinlinesize(self, tr):
        """Check if the revlog is too big for inline and convert if so.

        This should be called after revisions are added to the revlog. If the
        revlog has grown too large to be an inline revlog, it will convert it
        to use multiple index and data files.
        """
        tiprev = len(self) - 1
        total_size = self.start(tiprev) + self.length(tiprev)
        if not self._inline or (self._may_inline and total_size < _maxinline):
            return

        if self._docket is not None:
            msg = b"inline revlog should not have a docket"
            raise error.ProgrammingError(msg)

        # In the common case, we enforce inline size because the revlog has
        # been appended to. And in such a case, it must have an initial offset
        # recorded in the transaction.
        troffset = tr.findoffset(self._inner.canonical_index_file)
        pre_touched = troffset is not None
        if not pre_touched and self.target[0] != KIND_CHANGELOG:
            raise error.RevlogError(
                _(b"%s not found in the transaction") % self._indexfile
            )

        tr.addbackup(self._inner.canonical_index_file, for_offset=pre_touched)
        tr.add(self._datafile, 0)

        new_index_file_path = None
        old_index_file_path = self._indexfile
        new_index_file_path = self._split_index_file
        opener = self.opener
        weak_self = weakref.ref(self)

        # the "split" index replaces the real index when the transaction is
        # finalized
        def finalize_callback(tr):
            opener.rename(
                new_index_file_path,
                old_index_file_path,
                checkambig=True,
            )
            maybe_self = weak_self()
            if maybe_self is not None:
                maybe_self._indexfile = old_index_file_path
                maybe_self._inner.index_file = maybe_self._indexfile

        def abort_callback(tr):
            maybe_self = weak_self()
            if maybe_self is not None:
                maybe_self._indexfile = old_index_file_path
                maybe_self._inner.inline = True
                maybe_self._inner.index_file = old_index_file_path

        tr.registertmp(new_index_file_path)
+       # we use 001 here to make this happen after the finalisation of the
+       # pending changelog write (using 000). Otherwise the two finalizers
+       # would step over each other and delete the changelog.i file.
        if self.target[1] is not None:
-           callback_id = b'000-revlog-split-%d-%s' % self.target
+           callback_id = b'001-revlog-split-%d-%s' % self.target
        else:
-           callback_id = b'000-revlog-split-%d' % self.target[0]
+           callback_id = b'001-revlog-split-%d' % self.target[0]
        tr.addfinalize(callback_id, finalize_callback)
        tr.addabort(callback_id, abort_callback)

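# ---- editor's sketch (illustration only, not part of revlog.py) -----------
# Why the rename from '000-revlog-split' to '001-revlog-split' matters:
# transaction finalizers run sorted by callback id, so the split's rename of
# the index must be keyed *after* the '000'-prefixed pending-changelog
# finalizer, or the two callbacks step over each other and the changelog.i
# file is deleted. The callback ids below are hypothetical stand-ins.
_calls = []
_finalizers = {
    b'001-revlog-split-0': lambda: _calls.append(b'rename split index'),
    b'000-changelog-pending': lambda: _calls.append(b'write pending clog'),
}
for _key in sorted(_finalizers):  # transactions fire callbacks in id order
    _finalizers[_key]()
assert _calls == [b'write pending clog', b'rename split index']
# ----------------------------------------------------------------------------
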
        self._format_flags &= ~FLAG_INLINE_DATA
        self._inner.split_inline(
            tr,
            self._format_flags | self._format_version,
            new_index_file_path=new_index_file_path,
        )

        self._inline = False
        if new_index_file_path is not None:
            self._indexfile = new_index_file_path

        nodemaputil.setup_persistent_nodemap(tr, self)

    def _nodeduplicatecallback(self, transaction, node):
        """called when trying to add a node already stored."""

    @contextlib.contextmanager
    def reading(self):
        with self._inner.reading():
            yield

    @contextlib.contextmanager
    def _writing(self, transaction):
        if self._trypending:
            msg = b'try to write in a `trypending` revlog: %s'
            msg %= self.display_id
            raise error.ProgrammingError(msg)
        if self._inner.is_writing:
            yield
        else:
            data_end = None
            sidedata_end = None
            if self._docket is not None:
                data_end = self._docket.data_end
                sidedata_end = self._docket.sidedata_end
            with self._inner.writing(
                transaction,
                data_end=data_end,
                sidedata_end=sidedata_end,
            ):
                yield
                if self._docket is not None:
                    self._write_docket(transaction)

    @property
    def is_delaying(self):
        return self._inner.is_delaying

    def _write_docket(self, transaction):
        """write the current docket on disk

        Exists as a method to help the changelog implement transaction logic

        We could also imagine using the same transaction logic for all revlogs
        since dockets are cheap."""
        self._docket.write(transaction)

2962 def addrevision(
2965 def addrevision(
2963 self,
2966 self,
2964 text,
2967 text,
2965 transaction,
2968 transaction,
2966 link,
2969 link,
2967 p1,
2970 p1,
2968 p2,
2971 p2,
2969 cachedelta=None,
2972 cachedelta=None,
2970 node=None,
2973 node=None,
2971 flags=REVIDX_DEFAULT_FLAGS,
2974 flags=REVIDX_DEFAULT_FLAGS,
2972 deltacomputer=None,
2975 deltacomputer=None,
2973 sidedata=None,
2976 sidedata=None,
2974 ):
2977 ):
2975 """add a revision to the log
2978 """add a revision to the log
2976
2979
2977 text - the revision data to add
2980 text - the revision data to add
2978 transaction - the transaction object used for rollback
2981 transaction - the transaction object used for rollback
2979 link - the linkrev data to add
2982 link - the linkrev data to add
2980 p1, p2 - the parent nodeids of the revision
2983 p1, p2 - the parent nodeids of the revision
2981 cachedelta - an optional precomputed delta
2984 cachedelta - an optional precomputed delta
2982 node - nodeid of revision; typically node is not specified, and it is
2985 node - nodeid of revision; typically node is not specified, and it is
2983 computed by default as hash(text, p1, p2), however subclasses might
2986 computed by default as hash(text, p1, p2), however subclasses might
2984 use different hashing method (and override checkhash() in such case)
2987 use different hashing method (and override checkhash() in such case)
2985 flags - the known flags to set on the revision
2988 flags - the known flags to set on the revision
2986 deltacomputer - an optional deltacomputer instance shared between
2989 deltacomputer - an optional deltacomputer instance shared between
2987 multiple calls
2990 multiple calls
2988 """
2991 """
2989 if link == nullrev:
2992 if link == nullrev:
2990 raise error.RevlogError(
2993 raise error.RevlogError(
2991 _(b"attempted to add linkrev -1 to %s") % self.display_id
2994 _(b"attempted to add linkrev -1 to %s") % self.display_id
2992 )
2995 )
2993
2996
2994 if sidedata is None:
2997 if sidedata is None:
2995 sidedata = {}
2998 sidedata = {}
2996 elif sidedata and not self.feature_config.has_side_data:
2999 elif sidedata and not self.feature_config.has_side_data:
2997 raise error.ProgrammingError(
3000 raise error.ProgrammingError(
2998 _(b"trying to add sidedata to a revlog who don't support them")
3001 _(b"trying to add sidedata to a revlog who don't support them")
2999 )
3002 )
3000
3003
3001 if flags:
3004 if flags:
3002 node = node or self.hash(text, p1, p2)
3005 node = node or self.hash(text, p1, p2)
3003
3006
3004 rawtext, validatehash = flagutil.processflagswrite(self, text, flags)
3007 rawtext, validatehash = flagutil.processflagswrite(self, text, flags)
3005
3008
3006 # If the flag processor modifies the revision data, ignore any provided
3009 # If the flag processor modifies the revision data, ignore any provided
3007 # cachedelta.
3010 # cachedelta.
3008 if rawtext != text:
3011 if rawtext != text:
3009 cachedelta = None
3012 cachedelta = None
3010
3013
3011 if len(rawtext) > _maxentrysize:
3014 if len(rawtext) > _maxentrysize:
3012 raise error.RevlogError(
3015 raise error.RevlogError(
3013 _(
3016 _(
3014 b"%s: size of %d bytes exceeds maximum revlog storage of 2GiB"
3017 b"%s: size of %d bytes exceeds maximum revlog storage of 2GiB"
3015 )
3018 )
3016 % (self.display_id, len(rawtext))
3019 % (self.display_id, len(rawtext))
3017 )
3020 )
3018
3021
3019 node = node or self.hash(rawtext, p1, p2)
3022 node = node or self.hash(rawtext, p1, p2)
3020 rev = self.index.get_rev(node)
3023 rev = self.index.get_rev(node)
3021 if rev is not None:
3024 if rev is not None:
3022 return rev
3025 return rev
3023
3026
3024 if validatehash:
3027 if validatehash:
3025 self.checkhash(rawtext, node, p1=p1, p2=p2)
3028 self.checkhash(rawtext, node, p1=p1, p2=p2)
3026
3029
3027 return self.addrawrevision(
3030 return self.addrawrevision(
3028 rawtext,
3031 rawtext,
3029 transaction,
3032 transaction,
3030 link,
3033 link,
3031 p1,
3034 p1,
3032 p2,
3035 p2,
3033 node,
3036 node,
3034 flags,
3037 flags,
3035 cachedelta=cachedelta,
3038 cachedelta=cachedelta,
3036 deltacomputer=deltacomputer,
3039 deltacomputer=deltacomputer,
3037 sidedata=sidedata,
3040 sidedata=sidedata,
3038 )
3041 )
3039
3042
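    # A hedged caller sketch (names are placeholders): addrevision() returns
    # the newly assigned revision number, or the existing one when the node
    # is already stored:
    #
    #     rev = rl.addrevision(b'contents', tr, linkrev, p1node, p2node)
    #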
    def addrawrevision(
        self,
        rawtext,
        transaction,
        link,
        p1,
        p2,
        node,
        flags,
        cachedelta=None,
        deltacomputer=None,
        sidedata=None,
    ):
        """add a raw revision with known flags, node and parents
        useful when reusing a revision not stored in this revlog (ex: received
        over wire, or read from an external bundle).
        """
        with self._writing(transaction):
            return self._addrevision(
                node,
                rawtext,
                transaction,
                link,
                p1,
                p2,
                flags,
                cachedelta,
                deltacomputer=deltacomputer,
                sidedata=sidedata,
            )

    def compress(self, data):
        return self._inner.compress(data)

    def decompress(self, data):
        return self._inner.decompress(data)

    def _addrevision(
        self,
        node,
        rawtext,
        transaction,
        link,
        p1,
        p2,
        flags,
        cachedelta,
        alwayscache=False,
        deltacomputer=None,
        sidedata=None,
    ):
        """internal function to add revisions to the log

        see addrevision for argument descriptions.

        note: "addrevision" takes non-raw text, "_addrevision" takes raw text.

        if "deltacomputer" is not provided or None, a defaultdeltacomputer will
        be used.

        invariants:
        - rawtext is optional (can be None); if not set, cachedelta must be set.
          if both are set, they must correspond to each other.
        """
        if node == self.nullid:
            raise error.RevlogError(
                _(b"%s: attempt to add null revision") % self.display_id
            )
        if (
            node == self.nodeconstants.wdirid
            or node in self.nodeconstants.wdirfilenodeids
        ):
            raise error.RevlogError(
                _(b"%s: attempt to add wdir revision") % self.display_id
            )
        if self._inner._writinghandles is None:
            msg = b'adding revision outside `revlog._writing` context'
            raise error.ProgrammingError(msg)

        btext = [rawtext]

        curr = len(self)
        prev = curr - 1

        offset = self._get_data_offset(prev)

        if self._concurrencychecker:
            ifh, dfh, sdfh = self._inner._writinghandles
            # XXX no checking for the sidedata file
            if self._inline:
                # offset is "as if" it were in the .d file, so we need to add
                # on the size of the entry metadata.
                self._concurrencychecker(
                    ifh, self._indexfile, offset + curr * self.index.entry_size
                )
            else:
                # Entries in the .i are a consistent size.
                self._concurrencychecker(
                    ifh, self._indexfile, curr * self.index.entry_size
                )
                self._concurrencychecker(dfh, self._datafile, offset)

        p1r, p2r = self.rev(p1), self.rev(p2)

        # full versions are inserted when the needed deltas
        # become comparable to the uncompressed text
        if rawtext is None:
            # need rawtext size, before changed by flag processors, which is
            # the non-raw size. use revlog explicitly to avoid filelog's extra
            # logic that might remove metadata size.
            textlen = mdiff.patchedsize(
                revlog.size(self, cachedelta[0]), cachedelta[1]
            )
        else:
            textlen = len(rawtext)

        if deltacomputer is None:
            write_debug = None
            if self.delta_config.debug_delta:
                write_debug = transaction._report
            deltacomputer = deltautil.deltacomputer(
                self, write_debug=write_debug
            )

        if cachedelta is not None and len(cachedelta) == 2:
            # If the cached delta has no information about how it should be
            # reused, add the default reuse instruction according to the
            # revlog's configuration.
            if (
                self.delta_config.general_delta
                and self.delta_config.lazy_delta_base
            ):
                delta_base_reuse = DELTA_BASE_REUSE_TRY
            else:
                delta_base_reuse = DELTA_BASE_REUSE_NO
            cachedelta = (cachedelta[0], cachedelta[1], delta_base_reuse)

        revinfo = revlogutils.revisioninfo(
            node,
            p1,
            p2,
            btext,
            textlen,
            cachedelta,
            flags,
        )

        deltainfo = deltacomputer.finddeltainfo(revinfo)

        compression_mode = COMP_MODE_INLINE
        if self._docket is not None:
            default_comp = self._docket.default_compression_header
            r = deltautil.delta_compression(default_comp, deltainfo)
            compression_mode, deltainfo = r

        sidedata_compression_mode = COMP_MODE_INLINE
        if sidedata and self.feature_config.has_side_data:
            sidedata_compression_mode = COMP_MODE_PLAIN
            serialized_sidedata = sidedatautil.serialize_sidedata(sidedata)
            sidedata_offset = self._docket.sidedata_end
            h, comp_sidedata = self._inner.compress(serialized_sidedata)
            if (
                h != b'u'
                and comp_sidedata[0:1] != b'\0'
                and len(comp_sidedata) < len(serialized_sidedata)
            ):
                assert not h
                if (
                    comp_sidedata[0:1]
                    == self._docket.default_compression_header
                ):
                    sidedata_compression_mode = COMP_MODE_DEFAULT
                    serialized_sidedata = comp_sidedata
                else:
                    sidedata_compression_mode = COMP_MODE_INLINE
                    serialized_sidedata = comp_sidedata
        else:
            serialized_sidedata = b""
            # Don't store the offset if the sidedata is empty, that way we can
            # easily detect empty sidedata, and they will be no different from
            # ones we add manually.
            sidedata_offset = 0

        rank = RANK_UNKNOWN
        if self.feature_config.compute_rank:
            if (p1r, p2r) == (nullrev, nullrev):
                rank = 1
            elif p1r != nullrev and p2r == nullrev:
                rank = 1 + self.fast_rank(p1r)
            elif p1r == nullrev and p2r != nullrev:
                rank = 1 + self.fast_rank(p2r)
            else:  # merge node
                if rustdagop is not None and self.index.rust_ext_compat:
                    rank = rustdagop.rank(self.index, p1r, p2r)
                else:
                    pmin, pmax = sorted((p1r, p2r))
                    rank = 1 + self.fast_rank(pmax)
                    rank += sum(1 for _ in self.findmissingrevs([pmax], [pmin]))

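        # Informal summary (not from the original source): the rank of a
        # revision is the size of its ancestor set, itself included. For a
        # merge, rank = 1 + rank(pmax) + |ancestors(pmin) - ancestors(pmax)|,
        # which is exactly what the findmissingrevs() count above
        # contributes.
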
        e = revlogutils.entry(
            flags=flags,
            data_offset=offset,
            data_compressed_length=deltainfo.deltalen,
            data_uncompressed_length=textlen,
            data_compression_mode=compression_mode,
            data_delta_base=deltainfo.base,
            link_rev=link,
            parent_rev_1=p1r,
            parent_rev_2=p2r,
            node_id=node,
            sidedata_offset=sidedata_offset,
            sidedata_compressed_length=len(serialized_sidedata),
            sidedata_compression_mode=sidedata_compression_mode,
            rank=rank,
        )

        self.index.append(e)
        entry = self.index.entry_binary(curr)
        if curr == 0 and self._docket is None:
            header = self._format_flags | self._format_version
            header = self.index.pack_header(header)
            entry = header + entry
        self._writeentry(
            transaction,
            entry,
            deltainfo.data,
            link,
            offset,
            serialized_sidedata,
            sidedata_offset,
        )

        rawtext = btext[0]

        if alwayscache and rawtext is None:
            rawtext = deltacomputer.buildtext(revinfo)

        if type(rawtext) == bytes:  # only accept immutable objects
            self._inner._revisioncache = (node, curr, rawtext)
        self._chainbasecache[curr] = deltainfo.chainbase
        return curr

    def _get_data_offset(self, prev):
        """Returns the current offset in the (in-transaction) data file.
        Versions < 2 of the revlog can get this in O(1), while revlog v2
        needs a docket file to store that information: since sidedata can be
        rewritten to the end of the data file within a transaction, you can
        have cases where, for example, rev `n` does not have sidedata while
        rev `n - 1` does, leading to `n - 1`'s sidedata being written after
        `n`'s data.

        TODO cache this in a docket file before getting out of experimental."""
        if self._docket is None:
            return self.end(prev)
        else:
            return self._docket.data_end

    def _writeentry(
        self,
        transaction,
        entry,
        data,
        link,
        offset,
        sidedata,
        sidedata_offset,
    ):
        # Files opened in a+ mode have inconsistent behavior on various
        # platforms. Windows requires that a file positioning call be made
        # when the file handle transitions between reads and writes. See
        # 3686fa2b8eee and the mixedfilemodewrapper in windows.py. On other
        # platforms, Python or the platform itself can be buggy. Some versions
        # of Solaris have been observed to not append at the end of the file
        # if the file was seeked to before the end. See issue4943 for more.
        #
        # We work around this issue by inserting a seek() before writing.
        # Note: This is likely not necessary on Python 3. However, because
        # the file handle is reused for reads and may be seeked there, we need
        # to be careful before changing this.
        index_end = data_end = sidedata_end = None
        if self._docket is not None:
            index_end = self._docket.index_end
            data_end = self._docket.data_end
            sidedata_end = self._docket.sidedata_end

        files_end = self._inner.write_entry(
            transaction,
            entry,
            data,
            link,
            offset,
            sidedata,
            sidedata_offset,
            index_end,
            data_end,
            sidedata_end,
        )
        self._enforceinlinesize(transaction)
        if self._docket is not None:
            self._docket.index_end = files_end[0]
            self._docket.data_end = files_end[1]
            self._docket.sidedata_end = files_end[2]

        nodemaputil.setup_persistent_nodemap(transaction, self)

    def addgroup(
        self,
        deltas,
        linkmapper,
        transaction,
        alwayscache=False,
        addrevisioncb=None,
        duplicaterevisioncb=None,
        debug_info=None,
        delta_base_reuse_policy=None,
    ):
        """
        add a delta group

        given a set of deltas, add them to the revision log. the
        first delta is against its parent, which should be in our
        log, the rest are against the previous delta.

        If ``addrevisioncb`` is defined, it will be called with arguments of
        this revlog and the node that was added.
        """

        if self._adding_group:
            raise error.ProgrammingError(b'cannot nest addgroup() calls')

        # read the default delta-base reuse policy from revlog config if the
        # group did not specify one.
        if delta_base_reuse_policy is None:
            if (
                self.delta_config.general_delta
                and self.delta_config.lazy_delta_base
            ):
                delta_base_reuse_policy = DELTA_BASE_REUSE_TRY
            else:
                delta_base_reuse_policy = DELTA_BASE_REUSE_NO

        self._adding_group = True
        empty = True
        try:
            with self._writing(transaction):
                write_debug = None
                if self.delta_config.debug_delta:
                    write_debug = transaction._report
                deltacomputer = deltautil.deltacomputer(
                    self,
                    write_debug=write_debug,
                    debug_info=debug_info,
                )
                # loop through our set of deltas
                for data in deltas:
                    (
                        node,
                        p1,
                        p2,
                        linknode,
                        deltabase,
                        delta,
                        flags,
                        sidedata,
                    ) = data
                    link = linkmapper(linknode)
                    flags = flags or REVIDX_DEFAULT_FLAGS

                    rev = self.index.get_rev(node)
                    if rev is not None:
                        # this can happen if two branches make the same change
                        self._nodeduplicatecallback(transaction, rev)
                        if duplicaterevisioncb:
                            duplicaterevisioncb(self, rev)
                        empty = False
                        continue

                    for p in (p1, p2):
                        if not self.index.has_node(p):
                            raise error.LookupError(
                                p, self.radix, _(b'unknown parent')
                            )

                    if not self.index.has_node(deltabase):
                        raise error.LookupError(
                            deltabase, self.display_id, _(b'unknown delta base')
                        )

                    baserev = self.rev(deltabase)

                    if baserev != nullrev and self.iscensored(baserev):
                        # if base is censored, delta must be full replacement
                        # in a single patch operation
                        hlen = struct.calcsize(b">lll")
                        oldlen = self.rawsize(baserev)
                        newlen = len(delta) - hlen
                        if delta[:hlen] != mdiff.replacediffheader(
                            oldlen, newlen
                        ):
                            raise error.CensoredBaseError(
                                self.display_id, self.node(baserev)
                            )

                    if not flags and self._peek_iscensored(baserev, delta):
                        flags |= REVIDX_ISCENSORED

                    # We assume consumers of addrevisioncb will want to retrieve
                    # the added revision, which will require a call to
                    # revision(). revision() will fast path if there is a cache
                    # hit. So, we tell _addrevision() to always cache in this case.
                    # We're only using addgroup() in the context of changegroup
                    # generation so the revision data can always be handled as raw
                    # by the flagprocessor.
                    rev = self._addrevision(
                        node,
                        None,
                        transaction,
                        link,
                        p1,
                        p2,
                        flags,
                        (baserev, delta, delta_base_reuse_policy),
                        alwayscache=alwayscache,
                        deltacomputer=deltacomputer,
                        sidedata=sidedata,
                    )

                    if addrevisioncb:
                        addrevisioncb(self, rev)
                    empty = False
        finally:
            self._adding_group = False
        return not empty

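    # Shape of the items consumed by addgroup() (informal, values are
    # hypothetical):
    #
    #     (node, p1, p2, linknode, deltabase, delta, flags, sidedata)
    #
    # `delta` is a binary patch against the revision identified by
    # `deltabase`, and `linkmapper(linknode)` resolves the changelog
    # revision this entry links to.
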
    def iscensored(self, rev):
        """Check if a file revision is censored."""
        if not self.feature_config.censorable:
            return False

        return self.flags(rev) & REVIDX_ISCENSORED

    def _peek_iscensored(self, baserev, delta):
        """Quickly check if a delta produces a censored revision."""
        if not self.feature_config.censorable:
            return False

        return storageutil.deltaiscensored(delta, baserev, self.rawsize)

    def getstrippoint(self, minlink):
        """find the minimum rev that must be stripped to strip the linkrev

        Returns a tuple containing the minimum rev and a set of all revs that
        have linkrevs that will be broken by this strip.
        """
        return storageutil.resolvestripinfo(
            minlink,
            len(self) - 1,
            self.headrevs(),
            self.linkrev,
            self.parentrevs,
        )

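    # A hedged usage sketch (placeholder names): callers typically pair
    # getstrippoint() with strip(), saving the revisions reported as broken
    # before truncating:
    #
    #     rev, broken = rl.getstrippoint(minlink)
    #     rl.strip(minlink, tr)  # truncates the revlog at `rev`
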
    def strip(self, minlink, transaction):
        """truncate the revlog on the first revision with a linkrev >= minlink

        This function is called when we're stripping revision minlink and
        its descendants from the repository.

        We have to remove all revisions with linkrev >= minlink, because
        the equivalent changelog revisions will be renumbered after the
        strip.

        So we truncate the revlog on the first of these revisions, and
        trust that the caller has saved the revisions that shouldn't be
        removed and that it'll re-add them after this truncation.
        """
        if len(self) == 0:
            return

        rev, _ = self.getstrippoint(minlink)
        if rev == len(self):
            return

        # first truncate the files on disk
        data_end = self.start(rev)
        if not self._inline:
            transaction.add(self._datafile, data_end)
            end = rev * self.index.entry_size
        else:
            end = data_end + (rev * self.index.entry_size)

        if self._sidedatafile:
            sidedata_end = self.sidedata_cut_off(rev)
            transaction.add(self._sidedatafile, sidedata_end)

        transaction.add(self._indexfile, end)
        if self._docket is not None:
            # XXX we could leverage the docket while stripping. However it is
            # not powerful enough at the time of this comment.
            self._docket.index_end = end
            self._docket.data_end = data_end
            self._docket.sidedata_end = sidedata_end
            self._docket.write(transaction, stripping=True)

        # then reset internal state in memory to forget those revisions
        self._chaininfocache = util.lrucachedict(500)
        self._inner.clear_cache()

        del self.index[rev:-1]

    def checksize(self):
        """Check size of index and data files

        return a (dd, di) tuple.
        - dd: extra bytes for the "data" file
        - di: extra bytes for the "index" file

        A healthy revlog will return (0, 0).
        """
        expected = 0
        if len(self):
            expected = max(0, self.end(len(self) - 1))

        try:
            with self._datafp() as f:
                f.seek(0, io.SEEK_END)
                actual = f.tell()
            dd = actual - expected
        except FileNotFoundError:
            dd = 0

        try:
            f = self.opener(self._indexfile)
            f.seek(0, io.SEEK_END)
            actual = f.tell()
            f.close()
            s = self.index.entry_size
            i = max(0, actual // s)
            di = actual - (i * s)
            if self._inline:
                databytes = 0
                for r in self:
                    databytes += max(0, self.length(r))
                dd = 0
                di = actual - len(self) * s - databytes
        except FileNotFoundError:
            di = 0

        return (dd, di)

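    # Informal worked example (numbers are illustrative): for an inline
    # revlog holding 3 revisions with entry_size 64 and 100 bytes of
    # revision data, a healthy index file is 3 * 64 + 100 = 292 bytes, so
    # di = actual - 292 exposes any trailing garbage.
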
    def files(self):
        """return list of files that compose this revlog"""
        res = [self._indexfile]
        if self._docket_file is None:
            if not self._inline:
                res.append(self._datafile)
        else:
            res.append(self._docket_file)
            res.extend(self._docket.old_index_filepaths(include_empty=False))
            if self._docket.data_end:
                res.append(self._datafile)
            res.extend(self._docket.old_data_filepaths(include_empty=False))
            if self._docket.sidedata_end:
                res.append(self._sidedatafile)
            res.extend(self._docket.old_sidedata_filepaths(include_empty=False))
        return res

    def emitrevisions(
        self,
        nodes,
        nodesorder=None,
        revisiondata=False,
        assumehaveparentrevisions=False,
        deltamode=repository.CG_DELTAMODE_STD,
        sidedata_helpers=None,
        debug_info=None,
    ):
        if nodesorder not in (b'nodes', b'storage', b'linear', None):
            raise error.ProgrammingError(
                b'unhandled value for nodesorder: %s' % nodesorder
            )

        if nodesorder is None and not self.delta_config.general_delta:
            nodesorder = b'storage'

        if (
            not self._storedeltachains
            and deltamode != repository.CG_DELTAMODE_PREV
        ):
            deltamode = repository.CG_DELTAMODE_FULL

        return storageutil.emitrevisions(
            self,
            nodes,
            nodesorder,
            revlogrevisiondelta,
            deltaparentfn=self.deltaparent,
            candeltafn=self._candelta,
            rawsizefn=self.rawsize,
            revdifffn=self.revdiff,
            flagsfn=self.flags,
            deltamode=deltamode,
            revisiondata=revisiondata,
            assumehaveparentrevisions=assumehaveparentrevisions,
            sidedata_helpers=sidedata_helpers,
            debug_info=debug_info,
        )

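    # A hedged caller sketch (placeholder names): emit deltas with full
    # revision data for a couple of nodes in storage order:
    #
    #     for d in rl.emitrevisions((n1, n2), nodesorder=b'storage',
    #                               revisiondata=True):
    #         consume(d)  # `consume` is a placeholder
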
    DELTAREUSEALWAYS = b'always'
    DELTAREUSESAMEREVS = b'samerevs'
    DELTAREUSENEVER = b'never'

    DELTAREUSEFULLADD = b'fulladd'

    DELTAREUSEALL = {b'always', b'samerevs', b'never', b'fulladd'}

    def clone(
        self,
        tr,
        destrevlog,
        addrevisioncb=None,
        deltareuse=DELTAREUSESAMEREVS,
        forcedeltabothparents=None,
        sidedata_helpers=None,
    ):
        """Copy this revlog to another, possibly with format changes.

        The destination revlog will contain the same revisions and nodes.
        However, it may not be bit-for-bit identical due to e.g. delta encoding
        differences.

        The ``deltareuse`` argument controls how deltas from the existing
        revlog are preserved in the destination revlog. The argument can have
        the following values:

        DELTAREUSEALWAYS
           Deltas will always be reused (if possible), even if the destination
           revlog would not select the same revisions for the delta. This is
           the fastest mode of operation.
        DELTAREUSESAMEREVS
           Deltas will be reused if the destination revlog would pick the same
           revisions for the delta. This mode strikes a balance between speed
           and optimization.
        DELTAREUSENEVER
           Deltas will never be reused. This is the slowest mode of execution.
           This mode can be used to recompute deltas (e.g. if the diff/delta
           algorithm changes).
        DELTAREUSEFULLADD
           Revisions will be re-added as if they were new content. This is
           slower than DELTAREUSEALWAYS but allows more mechanisms to kick in,
           e.g. large file detection and handling.

        Delta computation can be slow, so the choice of delta reuse policy can
        significantly affect run time.

        The default policy (``DELTAREUSESAMEREVS``) strikes a balance between
        two extremes. Deltas will be reused if they are appropriate. But if the
        delta could choose a better revision, it will do so. This means if you
        are converting a non-generaldelta revlog to a generaldelta revlog,
        deltas will be recomputed if the delta's parent isn't a parent of the
        revision.

        In addition to the delta policy, the ``forcedeltabothparents``
        argument controls whether to force computing deltas against both
        parents for merges. By default, the destination revlog's current
        configuration is used.

        See `revlogutil.sidedata.get_sidedata_helpers` for the doc on
        `sidedata_helpers`.
        """
        if deltareuse not in self.DELTAREUSEALL:
            raise ValueError(
                _(b'value for deltareuse invalid: %s') % deltareuse
            )

        if len(destrevlog):
            raise ValueError(_(b'destination revlog is not empty'))

        if getattr(self, 'filteredrevs', None):
            raise ValueError(_(b'source revlog has filtered revisions'))
        if getattr(destrevlog, 'filteredrevs', None):
            raise ValueError(_(b'destination revlog has filtered revisions'))

        # lazydelta and lazydeltabase control whether to reuse a cached delta,
        # if possible.
        old_delta_config = destrevlog.delta_config
        destrevlog.delta_config = destrevlog.delta_config.copy()

        try:
            if deltareuse == self.DELTAREUSEALWAYS:
                destrevlog.delta_config.lazy_delta_base = True
                destrevlog.delta_config.lazy_delta = True
            elif deltareuse == self.DELTAREUSESAMEREVS:
                destrevlog.delta_config.lazy_delta_base = False
                destrevlog.delta_config.lazy_delta = True
            elif deltareuse == self.DELTAREUSENEVER:
                destrevlog.delta_config.lazy_delta_base = False
                destrevlog.delta_config.lazy_delta = False

            delta_both_parents = (
                forcedeltabothparents or old_delta_config.delta_both_parents
            )
            destrevlog.delta_config.delta_both_parents = delta_both_parents

            with self.reading(), destrevlog._writing(tr):
                self._clone(
                    tr,
                    destrevlog,
                    addrevisioncb,
                    deltareuse,
                    forcedeltabothparents,
                    sidedata_helpers,
                )

        finally:
            destrevlog.delta_config = old_delta_config

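    # A hedged usage sketch (placeholder names): force every delta to be
    # recomputed while copying into an empty destination revlog inside an
    # open transaction:
    #
    #     src.clone(tr, dest, deltareuse=src.DELTAREUSENEVER)
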
3755 def _clone(
3758 def _clone(
3756 self,
3759 self,
3757 tr,
3760 tr,
3758 destrevlog,
3761 destrevlog,
3759 addrevisioncb,
3762 addrevisioncb,
3760 deltareuse,
3763 deltareuse,
3761 forcedeltabothparents,
3764 forcedeltabothparents,
3762 sidedata_helpers,
3765 sidedata_helpers,
3763 ):
3766 ):
3764 """perform the core duty of `revlog.clone` after parameter processing"""
3767 """perform the core duty of `revlog.clone` after parameter processing"""
3765 write_debug = None
3768 write_debug = None
3766 if self.delta_config.debug_delta:
3769 if self.delta_config.debug_delta:
3767 write_debug = tr._report
3770 write_debug = tr._report
3768 deltacomputer = deltautil.deltacomputer(
3771 deltacomputer = deltautil.deltacomputer(
3769 destrevlog,
3772 destrevlog,
3770 write_debug=write_debug,
3773 write_debug=write_debug,
3771 )
3774 )
3772 index = self.index
3775 index = self.index
3773 for rev in self:
3776 for rev in self:
3774 entry = index[rev]
3777 entry = index[rev]
3775
3778
3776 # Some classes override linkrev to take filtered revs into
3779 # Some classes override linkrev to take filtered revs into
3777 # account. Use raw entry from index.
3780 # account. Use raw entry from index.
3778 flags = entry[0] & 0xFFFF
3781 flags = entry[0] & 0xFFFF
3779 linkrev = entry[4]
3782 linkrev = entry[4]
3780 p1 = index[entry[5]][7]
3783 p1 = index[entry[5]][7]
3781 p2 = index[entry[6]][7]
3784 p2 = index[entry[6]][7]
3782 node = entry[7]
3785 node = entry[7]
3783
3786
3784 # (Possibly) reuse the delta from the revlog if allowed and
3787 # (Possibly) reuse the delta from the revlog if allowed and
3785 # the revlog chunk is a delta.
3788 # the revlog chunk is a delta.
3786 cachedelta = None
3789 cachedelta = None
3787 rawtext = None
3790 rawtext = None
3788 if deltareuse == self.DELTAREUSEFULLADD:
3791 if deltareuse == self.DELTAREUSEFULLADD:
3789 text = self._revisiondata(rev)
3792 text = self._revisiondata(rev)
3790 sidedata = self.sidedata(rev)
3793 sidedata = self.sidedata(rev)
3791
3794
3792 if sidedata_helpers is not None:
3795 if sidedata_helpers is not None:
3793 (sidedata, new_flags) = sidedatautil.run_sidedata_helpers(
3796 (sidedata, new_flags) = sidedatautil.run_sidedata_helpers(
3794 self, sidedata_helpers, sidedata, rev
3797 self, sidedata_helpers, sidedata, rev
3795 )
3798 )
3796 flags = flags | new_flags[0] & ~new_flags[1]
3799 flags = flags | new_flags[0] & ~new_flags[1]
3797
3800
3798 destrevlog.addrevision(
3801 destrevlog.addrevision(
3799 text,
3802 text,
3800 tr,
3803 tr,
3801 linkrev,
3804 linkrev,
3802 p1,
3805 p1,
3803 p2,
3806 p2,
3804 cachedelta=cachedelta,
3807 cachedelta=cachedelta,
3805 node=node,
3808 node=node,
3806 flags=flags,
3809 flags=flags,
3807 deltacomputer=deltacomputer,
3810 deltacomputer=deltacomputer,
3808 sidedata=sidedata,
3811 sidedata=sidedata,
3809 )
3812 )
3810 else:
3813 else:
3811 if destrevlog.delta_config.lazy_delta:
3814 if destrevlog.delta_config.lazy_delta:
3812 dp = self.deltaparent(rev)
3815 dp = self.deltaparent(rev)
3813 if dp != nullrev:
3816 if dp != nullrev:
3814 cachedelta = (dp, bytes(self._inner._chunk(rev)))
3817 cachedelta = (dp, bytes(self._inner._chunk(rev)))
3815
3818
3816 sidedata = None
3819 sidedata = None
3817 if not cachedelta:
3820 if not cachedelta:
3818 try:
3821 try:
3819 rawtext = self._revisiondata(rev)
3822 rawtext = self._revisiondata(rev)
3820 except error.CensoredNodeError as censored:
3823 except error.CensoredNodeError as censored:
3821 assert flags & REVIDX_ISCENSORED
3824 assert flags & REVIDX_ISCENSORED
3822 rawtext = censored.tombstone
3825 rawtext = censored.tombstone
3823 sidedata = self.sidedata(rev)
3826 sidedata = self.sidedata(rev)
3824 if sidedata is None:
3827 if sidedata is None:
3825 sidedata = self.sidedata(rev)
3828 sidedata = self.sidedata(rev)
3826
3829
3827 if sidedata_helpers is not None:
3830 if sidedata_helpers is not None:
3828 (sidedata, new_flags) = sidedatautil.run_sidedata_helpers(
3831 (sidedata, new_flags) = sidedatautil.run_sidedata_helpers(
3829 self, sidedata_helpers, sidedata, rev
3832 self, sidedata_helpers, sidedata, rev
3830 )
3833 )
3831 flags = flags | new_flags[0] & ~new_flags[1]
3834 flags = flags | new_flags[0] & ~new_flags[1]
3832
3835
3833 destrevlog._addrevision(
3836 destrevlog._addrevision(
3834 node,
3837 node,
3835 rawtext,
3838 rawtext,
3836 tr,
3839 tr,
3837 linkrev,
3840 linkrev,
3838 p1,
3841 p1,
3839 p2,
3842 p2,
3840 flags,
3843 flags,
3841 cachedelta,
3844 cachedelta,
3842 deltacomputer=deltacomputer,
3845 deltacomputer=deltacomputer,
3843 sidedata=sidedata,
3846 sidedata=sidedata,
3844 )
3847 )
3845
3848
3846 if addrevisioncb:
3849 if addrevisioncb:
3850                     addrevisioncb(self, rev, node)
3851
3852     def censorrevision(self, tr, censor_nodes, tombstone=b''):
3853         if self._format_version == REVLOGV0:
3854             raise error.RevlogError(
3855                 _(b'cannot censor with version %d revlogs')
3856                 % self._format_version
3857             )
3858         elif self._format_version == REVLOGV1:
3859             rewrite.v1_censor(self, tr, censor_nodes, tombstone)
3860         else:
3861             rewrite.v2_censor(self, tr, censor_nodes, tombstone)
3862
3863     def verifyintegrity(self, state):
3864         """Verifies the integrity of the revlog.
3865
3866         Yields ``revlogproblem`` instances describing problems that are
3867         found.
3868         """
3869         dd, di = self.checksize()
3870         if dd:
3871             yield revlogproblem(error=_(b'data length off by %d bytes') % dd)
3872         if di:
3873             yield revlogproblem(error=_(b'index contains %d extra bytes') % di)
3874
3875         version = self._format_version
3876
3877         # The verifier tells us what version revlog we should be.
3878         if version != state[b'expectedversion']:
3879             yield revlogproblem(
3880                 warning=_(b"warning: '%s' uses revlog format %d; expected %d")
3881                 % (self.display_id, version, state[b'expectedversion'])
3882             )
3883
3884         state[b'skipread'] = set()
3885         state[b'safe_renamed'] = set()
3886
3887         for rev in self:
3888             node = self.node(rev)
3889
3890             # Verify contents. 4 cases to care about:
3891             #
3892             #   common: the most common case
3893             #   rename: with a rename
3894             #   meta: file content starts with b'\1\n', the metadata
3895             #         header defined in filelog.py, but without a rename
3896             #   ext: content stored externally
3897             #
3898             # More formally, their differences are shown below:
3899             #
3900             #                       | common | rename | meta  | ext
3901             #  -------------------------------------------------------
3902             #  flags()             | 0      | 0      | 0     | not 0
3903             #  renamed()           | False  | True   | False | ?
3904             #  rawtext[0:2]=='\1\n'| False  | True   | True  | ?
3905             #
3906             # "rawtext" means the raw text stored in revlog data, which
3907             # could be retrieved by "rawdata(rev)". "text"
3908             # mentioned below is "revision(rev)".
3909             #
3910             # There are 3 different lengths stored physically:
3911             #  1. L1: rawsize, stored in revlog index
3912             #  2. L2: len(rawtext), stored in revlog data
3913             #  3. L3: len(text), stored in revlog data if flags==0, or
3914             #     possibly somewhere else if flags!=0
3915             #
3916             # L1 should be equal to L2. L3 could be different from them.
3917             # "text" may or may not affect commit hash depending on flag
3918             # processors (see flagutil.addflagprocessor).
3919             #
3920             #              | common  | rename | meta  | ext
3921             # -------------------------------------------------
3922             # rawsize()    | L1      | L1     | L1    | L1
3923             # size()       | L1      | L2-LM  | L1(*) | L1 (?)
3924             # len(rawtext) | L2      | L2     | L2    | L2
3925             # len(text)    | L2      | L2     | L2    | L3
3926             # len(read())  | L2      | L2-LM  | L2-LM | L3 (?)
3927             #
3928             # LM:  length of metadata, depending on rawtext
3929             # (*): not ideal, see comment in filelog.size
3930             # (?): could be "- len(meta)" if the resolved content has
3931             #      rename metadata
3932             #
3933             # Checks needed to be done:
3934             # 1. length check: L1 == L2, in all cases.
3935             # 2. hash check: depending on flag processor, we may need to
3936             #    use either "text" (external), or "rawtext" (in revlog).
3937
3938             try:
3939                 skipflags = state.get(b'skipflags', 0)
3940                 if skipflags:
3941                     skipflags &= self.flags(rev)
3942
3943                 _verify_revision(self, skipflags, state, node)
3944
3945                 l1 = self.rawsize(rev)
3946                 l2 = len(self.rawdata(node))
3947
3948                 if l1 != l2:
3949                     yield revlogproblem(
3950                         error=_(b'unpacked size is %d, %d expected') % (l2, l1),
3951                         node=node,
3952                     )
3953
3954             except error.CensoredNodeError:
3955                 if state[b'erroroncensored']:
3956                     yield revlogproblem(
3957                         error=_(b'censored file data'), node=node
3958                     )
3959                 state[b'skipread'].add(node)
3960             except Exception as e:
3961                 yield revlogproblem(
3962                     error=_(b'unpacking %s: %s')
3963                     % (short(node), stringutil.forcebytestr(e)),
3964                     node=node,
3965                 )
3966                 state[b'skipread'].add(node)
3967
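(Editorial aside, not part of the diff.) A minimal sketch of how a caller might drive verifyintegrity() above, assuming `rl` is an already-open revlog instance; the state keys shown are the ones the method actually reads, and revlogproblem carries the `warning`, `error`, and `node` attributes used above:

    state = {
        b'expectedversion': 1,     # revlog format the verifier expects
        b'skipflags': 0,           # no flag bits exempt from hash checking
        b'erroroncensored': True,  # report censored data as errors
    }
    for problem in rl.verifyintegrity(state):
        # each yielded revlogproblem carries at most one of warning/error
        if problem.error is not None:
            print('error:', problem.error.decode('utf-8', 'replace'))
        elif problem.warning is not None:
            print(problem.warning.decode('utf-8', 'replace'))
    # nodes whose content could not be checked were recorded by the method
    unreadable = state[b'skipread']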
3968     def storageinfo(
3969         self,
3970         exclusivefiles=False,
3971         sharedfiles=False,
3972         revisionscount=False,
3973         trackedsize=False,
3974         storedsize=False,
3975     ):
3976         d = {}
3977
3978         if exclusivefiles:
3979             d[b'exclusivefiles'] = [(self.opener, self._indexfile)]
3980             if not self._inline:
3981                 d[b'exclusivefiles'].append((self.opener, self._datafile))
3982
3983         if sharedfiles:
3984             d[b'sharedfiles'] = []
3985
3986         if revisionscount:
3987             d[b'revisionscount'] = len(self)
3988
3989         if trackedsize:
3990             d[b'trackedsize'] = sum(map(self.rawsize, iter(self)))
3991
3992         if storedsize:
3993             d[b'storedsize'] = sum(
3994                 self.opener.stat(path).st_size for path in self.files()
3995             )
3996
3997         return d
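(Editorial aside.) storageinfo() computes only what the caller asks for, so a storage report can stay cheap. A sketch, again assuming an open revlog `rl`:

    # unrequested keys are simply absent from the returned dict
    info = rl.storageinfo(
        revisionscount=True, trackedsize=True, storedsize=True
    )
    print('revisions:', info[b'revisionscount'])
    print('tracked (uncompressed) bytes:', info[b'trackedsize'])
    print('stored (on-disk) bytes:', info[b'storedsize'])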
3998
3999     def rewrite_sidedata(self, transaction, helpers, startrev, endrev):
4000         if not self.feature_config.has_side_data:
4001             return
4002         # revlog formats with sidedata support do not support inline
4003         assert not self._inline
4004         if not helpers[1] and not helpers[2]:
4005             # Nothing to generate or remove
4006             return
4007
4008         new_entries = []
4009         # append the new sidedata
4010         with self._writing(transaction):
4011             ifh, dfh, sdfh = self._inner._writinghandles
4012             dfh.seek(self._docket.sidedata_end, os.SEEK_SET)
4013
4014             current_offset = sdfh.tell()
4015             for rev in range(startrev, endrev + 1):
4016                 entry = self.index[rev]
4017                 new_sidedata, flags = sidedatautil.run_sidedata_helpers(
4018                     store=self,
4019                     sidedata_helpers=helpers,
4020                     sidedata={},
4021                     rev=rev,
4022                 )
4023
4024                 serialized_sidedata = sidedatautil.serialize_sidedata(
4025                     new_sidedata
4026                 )
4027
4028                 sidedata_compression_mode = COMP_MODE_INLINE
4029                 if serialized_sidedata and self.feature_config.has_side_data:
4030                     sidedata_compression_mode = COMP_MODE_PLAIN
4031                     h, comp_sidedata = self._inner.compress(serialized_sidedata)
4032                     if (
4033                         h != b'u'
4034                         and comp_sidedata[0] != b'\0'
4035                         and len(comp_sidedata) < len(serialized_sidedata)
4036                     ):
4037                         assert not h
4038                         if (
4039                             comp_sidedata[0]
4040                             == self._docket.default_compression_header
4041                         ):
4042                             sidedata_compression_mode = COMP_MODE_DEFAULT
4043                             serialized_sidedata = comp_sidedata
4044                         else:
4045                             sidedata_compression_mode = COMP_MODE_INLINE
4046                             serialized_sidedata = comp_sidedata
4047                 if entry[8] != 0 or entry[9] != 0:
4048                     # rewriting entries that already have sidedata is not
4049                     # supported yet, because it introduces garbage data in the
4050                     # revlog.
4051                     msg = b"rewriting existing sidedata is not supported yet"
4052                     raise error.Abort(msg)
4053
4054                 # Apply (potential) flags to add and to remove after running
4055                 # the sidedata helpers
4056                 new_offset_flags = entry[0] | flags[0] & ~flags[1]
4057                 entry_update = (
4058                     current_offset,
4059                     len(serialized_sidedata),
4060                     new_offset_flags,
4061                     sidedata_compression_mode,
4062                 )
4063
4064                 # the sidedata computation might have moved the file cursors around
4065                 sdfh.seek(current_offset, os.SEEK_SET)
4066                 sdfh.write(serialized_sidedata)
4067                 new_entries.append(entry_update)
4068                 current_offset += len(serialized_sidedata)
4069             self._docket.sidedata_end = sdfh.tell()
4070
4071             # rewrite the new index entries
4072             ifh.seek(startrev * self.index.entry_size)
4073             for i, e in enumerate(new_entries):
4074                 rev = startrev + i
4075                 self.index.replace_sidedata_info(rev, *e)
4076                 packed = self.index.entry_binary(rev)
4077                 if rev == 0 and self._docket is None:
4078                     header = self._format_flags | self._format_version
4079                     header = self.index.pack_header(header)
4080                     packed = header + packed
4081                 ifh.write(packed)
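(Editorial aside.) The nested conditionals in rewrite_sidedata above reduce to one decision: store the sidedata raw, compressed with the docket's default engine, or compressed with an explicit engine marker. A standalone sketch of that choice, with `compress` and `default_header` standing in for `self._inner.compress` and `self._docket.default_compression_header`:

    def pick_sidedata_mode(raw, compress, default_header):
        # compress() mirrors revlog compression: it returns a one-byte
        # engine header (b'u' means "left uncompressed") and the payload
        header, comp = compress(raw)
        if header == b'u' or comp[0:1] == b'\0' or len(comp) >= len(raw):
            # compression did not help, or the result is ambiguous:
            # store the serialized sidedata as-is (COMP_MODE_PLAIN)
            return 'plain', raw
        if comp[0:1] == default_header:
            # engine matches the docket default (COMP_MODE_DEFAULT)
            return 'default', comp
        # compressed, engine marker carried in the data (COMP_MODE_INLINE)
        return 'inline', comp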
@@ -1,1253 +1,1253 b''
1
2 $ hg init repo
3 $ cd repo
4
5 $ cat > $TESTTMP/hook.sh <<'EOF'
6 > echo "test-hook-bookmark: $HG_BOOKMARK: $HG_OLDNODE -> $HG_NODE"
7 > EOF
8 $ TESTHOOK="hooks.txnclose-bookmark.test=sh $TESTTMP/hook.sh"
9
10 no bookmarks
11
12 $ hg bookmarks
13 no bookmarks set
14
15 $ hg bookmarks -Tjson
16 [
17 ]
18
19 bookmark rev -1
20
21 $ hg bookmark X --config "$TESTHOOK"
22 test-hook-bookmark: X:  -> 0000000000000000000000000000000000000000
23
24 list bookmarks
25
26 $ hg bookmarks
27 * X -1:000000000000
28
29 list bookmarks with color
30
31 $ hg --config extensions.color= --config color.mode=ansi \
32 > bookmarks --color=always
33 \x1b[0;32m * \x1b[0m\x1b[0;32mX\x1b[0m\x1b[0;32m -1:000000000000\x1b[0m (esc)
34
35 $ echo a > a
36 $ hg add a
37 $ hg commit -m 0 --config "$TESTHOOK"
38 test-hook-bookmark: X: 0000000000000000000000000000000000000000 -> f7b1eb17ad24730a1651fccd46c43826d1bbc2ac
39
40 bookmark X moved to rev 0
41
42 $ hg bookmarks
43 * X 0:f7b1eb17ad24
44
45 look up bookmark
46
47 $ hg log -r X
48 changeset: 0:f7b1eb17ad24
49 bookmark: X
50 tag: tip
51 user: test
52 date: Thu Jan 01 00:00:00 1970 +0000
53 summary: 0
54
55
56 second bookmark for rev 0, command should work even with ui.strict on
57
58 $ hg --config ui.strict=1 bookmark X2 --config "$TESTHOOK"
59 test-hook-bookmark: X2:  -> f7b1eb17ad24730a1651fccd46c43826d1bbc2ac
60
61 bookmark rev -1 again
62
63 $ hg bookmark -r null Y
64
65 list bookmarks
66
67 $ hg bookmarks
68 X 0:f7b1eb17ad24
69 * X2 0:f7b1eb17ad24
70 Y -1:000000000000
71 $ hg bookmarks -l
72 X 0:f7b1eb17ad24
73 * X2 0:f7b1eb17ad24
74 Y -1:000000000000
75 $ hg bookmarks -l X Y
76 X 0:f7b1eb17ad24
77 Y -1:000000000000
78 $ hg bookmarks -l .
79 * X2 0:f7b1eb17ad24
80 $ hg bookmarks -l X A Y
81 abort: bookmark 'A' does not exist
82 [10]
83 $ hg bookmarks -l -r0
84 abort: cannot specify both --list and --rev
85 [10]
86 $ hg bookmarks -l --inactive
87 abort: cannot specify both --inactive and --list
88 [10]
89
90 $ hg log -T '{bookmarks % "{rev} {bookmark}\n"}'
91 0 X
92 0 X2
93
94 $ echo b > b
95 $ hg add b
96 $ hg commit -m 1 --config "$TESTHOOK"
97 test-hook-bookmark: X2: f7b1eb17ad24730a1651fccd46c43826d1bbc2ac -> 925d80f479bb026b0fb3deb27503780b13f74123
98
99 $ hg bookmarks -T '{rev}:{node|shortest} {bookmark} {desc|firstline}\n'
100 0:f7b1 X 0
101 1:925d X2 1
102 -1:0000 Y
103
104 $ hg bookmarks -Tjson
105 [
106 {
107 "active": false,
108 "bookmark": "X",
109 "node": "f7b1eb17ad24730a1651fccd46c43826d1bbc2ac",
110 "rev": 0
111 },
112 {
113 "active": true,
114 "bookmark": "X2",
115 "node": "925d80f479bb026b0fb3deb27503780b13f74123",
116 "rev": 1
117 },
118 {
119 "active": false,
120 "bookmark": "Y",
121 "node": "0000000000000000000000000000000000000000",
122 "rev": -1
123 }
124 ]
125
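(Editorial aside.) The -Tjson listing above is machine-readable; a minimal sketch of consuming it from Python, assuming `hg` is on PATH and the command is run inside the repository:

    import json
    import subprocess

    # run the same command as above and decode its JSON template output
    out = subprocess.run(
        ['hg', 'bookmarks', '-Tjson'], check=True, capture_output=True
    )
    for mark in json.loads(out.stdout):
        # each entry has the fields shown above: active, bookmark, node, rev
        star = '*' if mark['active'] else ' '
        print('%s %s %d:%s'
              % (star, mark['bookmark'], mark['rev'], mark['node'][:12]))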
126 bookmarks revset
127
128 $ hg log -r 'bookmark()'
129 changeset: 0:f7b1eb17ad24
130 bookmark: X
131 user: test
132 date: Thu Jan 01 00:00:00 1970 +0000
133 summary: 0
134
135 changeset: 1:925d80f479bb
136 bookmark: X2
137 tag: tip
138 user: test
139 date: Thu Jan 01 00:00:00 1970 +0000
140 summary: 1
141
142 $ hg log -r 'bookmark(Y)'
143 $ hg log -r 'bookmark(X2)'
144 changeset: 1:925d80f479bb
145 bookmark: X2
146 tag: tip
147 user: test
148 date: Thu Jan 01 00:00:00 1970 +0000
149 summary: 1
150
151 $ hg log -r 'bookmark("re:X")'
152 changeset: 0:f7b1eb17ad24
153 bookmark: X
154 user: test
155 date: Thu Jan 01 00:00:00 1970 +0000
156 summary: 0
157
158 changeset: 1:925d80f479bb
159 bookmark: X2
160 tag: tip
161 user: test
162 date: Thu Jan 01 00:00:00 1970 +0000
163 summary: 1
164
165 $ hg log -r 'bookmark("literal:X")'
166 changeset: 0:f7b1eb17ad24
167 bookmark: X
168 user: test
169 date: Thu Jan 01 00:00:00 1970 +0000
170 summary: 0
171
172
173 "." is expanded to the active bookmark:
174
175 $ hg log -r 'bookmark(.)'
176 changeset: 1:925d80f479bb
177 bookmark: X2
178 tag: tip
179 user: test
180 date: Thu Jan 01 00:00:00 1970 +0000
181 summary: 1
182
183
184 but "literal:." is not, since "." is not an actual bookmark name:
185
186 $ hg log -r 'bookmark("literal:.")'
187 abort: bookmark '.' does not exist
188 [10]
189
190 "." should fail if there's no active bookmark:
191
192 $ hg bookmark --inactive
193 $ hg log -r 'bookmark(.)'
194 abort: no active bookmark
195 [10]
196 $ hg log -r 'present(bookmark(.))'
197
198 $ hg log -r 'bookmark(unknown)'
199 abort: bookmark 'unknown' does not exist
200 [10]
201 $ hg log -r 'bookmark("literal:unknown")'
202 abort: bookmark 'unknown' does not exist
203 [10]
204 $ hg log -r 'bookmark("re:unknown")'
205 $ hg log -r 'present(bookmark("literal:unknown"))'
206 $ hg log -r 'present(bookmark("re:unknown"))'
207
208 $ hg help revsets | grep 'bookmark('
209 "bookmark([name])"
210
211 reactivate "X2"
212
213 $ hg update X2
214 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
215 (activating bookmark X2)
216
217 bookmarks X and X2 moved to rev 1, Y at rev -1
218
219 $ hg bookmarks
220 X 0:f7b1eb17ad24
221 * X2 1:925d80f479bb
222 Y -1:000000000000
223
224 bookmark rev 0 again
225
226 $ hg bookmark -r 0 Z
227
228 $ hg update X
229 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
230 (activating bookmark X)
231 $ echo c > c
232 $ hg add c
233 $ hg commit -m 2
234 created new head
235
236 bookmarks X moved to rev 2, Y at rev -1, Z at rev 0
237
238 $ hg bookmarks
239 * X 2:db815d6d32e6
240 X2 1:925d80f479bb
241 Y -1:000000000000
242 Z 0:f7b1eb17ad24
243
244 rename nonexistent bookmark
245
246 $ hg bookmark -m A B
247 abort: bookmark 'A' does not exist
248 [10]
249
250 rename to existent bookmark
251
252 $ hg bookmark -m X Y
253 abort: bookmark 'Y' already exists (use -f to force)
254 [255]
255
256 force rename to existent bookmark
257
258 $ hg bookmark -f -m X Y
259
260 rename bookmark using .
261
262 $ hg book rename-me
263 $ hg book -m . renamed --config "$TESTHOOK"
264 test-hook-bookmark: rename-me: db815d6d32e69058eadefc8cffbad37675707975 ->
265 test-hook-bookmark: renamed:  -> db815d6d32e69058eadefc8cffbad37675707975
266 $ hg bookmark
267 X2 1:925d80f479bb
268 Y 2:db815d6d32e6
269 Z 0:f7b1eb17ad24
270 * renamed 2:db815d6d32e6
271 $ hg up -q Y
272 $ hg book -d renamed --config "$TESTHOOK"
273 test-hook-bookmark: renamed: db815d6d32e69058eadefc8cffbad37675707975 ->
274
275 rename bookmark using . with no active bookmark
276
277 $ hg book rename-me
278 $ hg book -i rename-me
279 $ hg book -m . renamed
280 abort: no active bookmark
281 [10]
282 $ hg up -q Y
283 $ hg book -d rename-me
284
285 delete bookmark using .
286
287 $ hg book delete-me
288 $ hg book -d .
289 $ hg bookmark
290 X2 1:925d80f479bb
291 Y 2:db815d6d32e6
292 Z 0:f7b1eb17ad24
293 $ hg up -q Y
294
295 delete bookmark using . with no active bookmark
296
297 $ hg book delete-me
298 $ hg book -i delete-me
299 $ hg book -d .
300 abort: no active bookmark
301 [10]
302 $ hg up -q Y
303 $ hg book -d delete-me
304
305 list bookmarks
306
307 $ hg bookmark
308 X2 1:925d80f479bb
309 * Y 2:db815d6d32e6
310 Z 0:f7b1eb17ad24
311
312 bookmarks from a revset
313 $ hg bookmark -r '.^1' REVSET
314 $ hg bookmark -r ':tip' TIP
315 $ hg up -q TIP
316 $ hg bookmarks
317 REVSET 0:f7b1eb17ad24
318 * TIP 2:db815d6d32e6
319 X2 1:925d80f479bb
320 Y 2:db815d6d32e6
321 Z 0:f7b1eb17ad24
322
323 $ hg bookmark -d REVSET
324 $ hg bookmark -d TIP
325
326 rename without new name or multiple names
327
328 $ hg bookmark -m Y
329 abort: new bookmark name required
330 [10]
331 $ hg bookmark -m Y Y2 Y3
332 abort: only one new bookmark name allowed
333 [10]
334
335 delete without name
336
337 $ hg bookmark -d
338 abort: bookmark name required
339 [10]
340
341 delete nonexistent bookmark
342
343 $ hg bookmark -d A
344 abort: bookmark 'A' does not exist
345 [10]
346
347 delete with --inactive
348
349 $ hg bookmark -d --inactive Y
350 abort: cannot specify both --inactive and --delete
351 [10]
352
353 bookmark name with spaces should be stripped
354
355 $ hg bookmark ' x  y '
356
357 list bookmarks
358
359 $ hg bookmarks
360 X2 1:925d80f479bb
361 Y 2:db815d6d32e6
362 Z 0:f7b1eb17ad24
363 * x  y 2:db815d6d32e6
364 $ hg log -T '{bookmarks % "{rev} {bookmark}\n"}'
365 2 Y
366 2 x  y
367 1 X2
368 0 Z
369
370 look up stripped bookmark name
371
372 $ hg log -r '"x  y"'
373 changeset: 2:db815d6d32e6
374 bookmark: Y
375 bookmark: x  y
376 tag: tip
377 parent: 0:f7b1eb17ad24
378 user: test
379 date: Thu Jan 01 00:00:00 1970 +0000
380 summary: 2
381
382
383 reject bookmark name with newline
384
385 $ hg bookmark '
386 > '
387 abort: bookmark names cannot consist entirely of whitespace
388 [10]
389
390 $ hg bookmark -m Z '
391 > '
392 abort: bookmark names cannot consist entirely of whitespace
393 [10]
394
395 bookmark with reserved name
396
397 $ hg bookmark tip
398 abort: the name 'tip' is reserved
399 [10]
400
401 $ hg bookmark .
402 abort: the name '.' is reserved
403 [10]
404
405 $ hg bookmark null
406 abort: the name 'null' is reserved
407 [10]
408
409
410 bookmark with existing name
411
412 $ hg bookmark X2
413 abort: bookmark 'X2' already exists (use -f to force)
414 [255]
415
416 $ hg bookmark -m Y Z
417 abort: bookmark 'Z' already exists (use -f to force)
418 [255]
419
420 bookmark with name of branch
421
422 $ hg bookmark default
423 abort: a bookmark cannot have the name of an existing branch
424 [255]
425
426 $ hg bookmark -m Y default
427 abort: a bookmark cannot have the name of an existing branch
428 [255]
429
430 bookmark with integer name
431
432 $ hg bookmark 10
433 abort: cannot use an integer as a name
434 [10]
435
436 bookmark with a name that matches a node id
437 $ hg bookmark 925d80f479bb db815d6d32e6 --config "$TESTHOOK"
438 bookmark 925d80f479bb matches a changeset hash
439 (did you leave a -r out of an 'hg bookmark' command?)
440 bookmark db815d6d32e6 matches a changeset hash
441 (did you leave a -r out of an 'hg bookmark' command?)
442 test-hook-bookmark: 925d80f479bb:  -> db815d6d32e69058eadefc8cffbad37675707975
443 test-hook-bookmark: db815d6d32e6:  -> db815d6d32e69058eadefc8cffbad37675707975
444 $ hg bookmark -d 925d80f479bb
445 $ hg bookmark -d db815d6d32e6
446
447 $ cd ..
448
449 bookmark with a name that matches an ambiguous node id
450
451 $ hg init ambiguous
452 $ cd ambiguous
453 $ echo 0 > a
454 $ hg ci -qAm 0
455 $ for i in 1057 2857 4025; do
456 > hg up -q 0
457 > echo $i > a
458 > hg ci -qm $i
459 > done
460 $ hg up -q null
461 $ hg log -r0: -T '{rev}:{node}\n'
462 0:b4e73ffab476aa0ee32ed81ca51e07169844bc6a
463 1:c56256a09cd28e5764f32e8e2810d0f01e2e357a
464 2:c5623987d205cd6d9d8389bfc40fff9dbb670b48
465 3:c562ddd9c94164376c20b86b0b4991636a3bf84f
466
467 $ hg bookmark -r0 c562
468 $ hg bookmarks
469 c562 0:b4e73ffab476
470
471 $ cd ..
472
473 incompatible options
474
475 $ cd repo
476
477 $ hg bookmark -m Y -d Z
478 abort: cannot specify both --delete and --rename
479 [10]
480
481 $ hg bookmark -r 1 -d Z
482 abort: cannot specify both --delete and --rev
483 [10]
484
485 $ hg bookmark -r 1 -m Z Y
486 abort: cannot specify both --rename and --rev
487 [10]
488
489 force bookmark with existing name
490
491 $ hg bookmark -f X2 --config "$TESTHOOK"
492 test-hook-bookmark: X2: 925d80f479bb026b0fb3deb27503780b13f74123 -> db815d6d32e69058eadefc8cffbad37675707975
493
494 force bookmark back to where it was, should deactivate it
495
496 $ hg bookmark -fr1 X2
497 $ hg bookmarks
498 X2 1:925d80f479bb
499 Y 2:db815d6d32e6
500 Z 0:f7b1eb17ad24
501 x  y 2:db815d6d32e6
502
503 forward bookmark to descendant without --force
504
505 $ hg bookmark Z
506 moving bookmark 'Z' forward from f7b1eb17ad24
507
508 list bookmarks
509
510 $ hg bookmark
511 X2 1:925d80f479bb
512 Y 2:db815d6d32e6
513 * Z 2:db815d6d32e6
514 x  y 2:db815d6d32e6
515 $ hg log -T '{bookmarks % "{rev} {bookmark}\n"}'
516 2 Y
517 2 Z
518 2 x  y
519 1 X2
520
521 revision but no bookmark name
522
523 $ hg bookmark -r .
524 abort: bookmark name required
525 [10]
526
527 bookmark name with whitespace only
528
529 $ hg bookmark ' '
530 abort: bookmark names cannot consist entirely of whitespace
531 [10]
532
533 $ hg bookmark -m Y ' '
534 abort: bookmark names cannot consist entirely of whitespace
535 [10]
536
537 invalid bookmark
538
539 $ hg bookmark 'foo:bar'
540 abort: ':' cannot be used in a name
541 [10]
542
543 $ hg bookmark 'foo
544 > bar'
545 abort: '\n' cannot be used in a name
546 [10]
547
548 the bookmark extension should be ignored now that it is part of core
549
550 $ echo "[extensions]" >> $HGRCPATH
551 $ echo "bookmarks=" >> $HGRCPATH
552 $ hg bookmarks
553 X2 1:925d80f479bb
554 Y 2:db815d6d32e6
555 * Z 2:db815d6d32e6
556 x  y 2:db815d6d32e6
557
558 test summary
559
560 $ hg summary
561 parent: 2:db815d6d32e6 tip
562 2
563 branch: default
564 bookmarks: *Z Y x  y
565 commit: (clean)
566 update: 1 new changesets, 2 branch heads (merge)
567 phases: 3 draft
568
569 test id
570
571 $ hg id
572 db815d6d32e6 tip Y/Z/x  y
573
574 test rollback
575
576 $ echo foo > f1
577 $ hg bookmark tmp-rollback
578 $ hg add .
579 adding f1
580 $ hg ci -mr
581 $ hg bookmarks
582 X2 1:925d80f479bb
583 Y 2:db815d6d32e6
584 Z 2:db815d6d32e6
585 * tmp-rollback 3:2bf5cfec5864
586 x  y 2:db815d6d32e6
587 $ hg rollback
588 repository tip rolled back to revision 2 (undo commit)
589 working directory now based on revision 2
590 $ hg bookmarks
591 X2 1:925d80f479bb
592 Y 2:db815d6d32e6
593 Z 2:db815d6d32e6
594 * tmp-rollback 2:db815d6d32e6
595 x  y 2:db815d6d32e6
596 $ hg bookmark -f Z -r 1
597 $ hg rollback
598 repository tip rolled back to revision 2 (undo bookmark)
599 $ hg bookmarks
600 X2 1:925d80f479bb
601 Y 2:db815d6d32e6
602 Z 2:db815d6d32e6
603 * tmp-rollback 2:db815d6d32e6
604 x  y 2:db815d6d32e6
605 $ hg bookmark -d tmp-rollback
606
607 activate bookmark on working dir parent without --force
608
609 $ hg bookmark --inactive Z
610 $ hg bookmark Z
611
612 deactivate current 'Z', but also add 'Y'
613
614 $ hg bookmark -d Y
615 $ hg bookmark --inactive Z Y
616 $ hg bookmark -l
617 X2 1:925d80f479bb
618 Y 2:db815d6d32e6
619 Z 2:db815d6d32e6
620 x  y 2:db815d6d32e6
621 $ hg bookmark Z
622
623 bookmark wdir to activate it (issue6218)
624
625 $ hg bookmark -d Z
626 $ hg bookmark -r 'wdir()' Z
627 $ hg bookmark -l
628 X2 1:925d80f479bb
629 Y 2:db815d6d32e6
630 * Z 2:db815d6d32e6
631 x  y 2:db815d6d32e6
632
633 test clone
634
635 $ hg bookmark -r 2 -i @
636 $ hg bookmark -r 2 -i a@
637 $ hg bookmarks
638 @ 2:db815d6d32e6
639 X2 1:925d80f479bb
640 Y 2:db815d6d32e6
641 * Z 2:db815d6d32e6
642 a@ 2:db815d6d32e6
643 x  y 2:db815d6d32e6
644 $ hg clone . cloned-bookmarks
645 updating to bookmark @
646 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
647 $ hg -R cloned-bookmarks bookmarks
648 * @ 2:db815d6d32e6
649 X2 1:925d80f479bb
650 Y 2:db815d6d32e6
651 Z 2:db815d6d32e6
652 a@ 2:db815d6d32e6
653 x  y 2:db815d6d32e6
654
655 test clone with pull protocol
656
657 $ hg clone --pull . cloned-bookmarks-pull
658 requesting all changes
659 adding changesets
660 adding manifests
661 adding file changes
662 added 3 changesets with 3 changes to 3 files (+1 heads)
663 new changesets f7b1eb17ad24:db815d6d32e6
664 updating to bookmark @
665 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
666 $ hg -R cloned-bookmarks-pull bookmarks
667 * @ 2:db815d6d32e6
668 X2 1:925d80f479bb
669 Y 2:db815d6d32e6
670 Z 2:db815d6d32e6
671 a@ 2:db815d6d32e6
672 x  y 2:db815d6d32e6
673
674 delete multiple bookmarks at once
675
676 $ hg bookmark -d @ a@
677
678 test clone with a bookmark named "default" (issue3677)
679
680 $ hg bookmark -r 1 -f -i default
681 $ hg clone . cloned-bookmark-default
682 updating to branch default
683 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
684 $ hg -R cloned-bookmark-default bookmarks
685 X2 1:925d80f479bb
686 Y 2:db815d6d32e6
687 Z 2:db815d6d32e6
688 default 1:925d80f479bb
689 x  y 2:db815d6d32e6
690 $ hg -R cloned-bookmark-default parents -q
691 2:db815d6d32e6
692 $ hg bookmark -d default
693
694 test clone with a specific revision
695
696 $ hg clone -r 925d80 . cloned-bookmarks-rev
697 adding changesets
698 adding manifests
699 adding file changes
700 added 2 changesets with 2 changes to 2 files
701 new changesets f7b1eb17ad24:925d80f479bb
702 updating to branch default
703 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
704 $ hg -R cloned-bookmarks-rev bookmarks
705 X2 1:925d80f479bb
706
707 test clone with update to a bookmark
708
709 $ hg clone -u Z . ../cloned-bookmarks-update
710 updating to branch default
711 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
712 $ hg -R ../cloned-bookmarks-update bookmarks
713 X2 1:925d80f479bb
714 Y 2:db815d6d32e6
715 * Z 2:db815d6d32e6
716 x  y 2:db815d6d32e6
717
718 create bundle with two heads
719
720 $ hg clone . tobundle
721 updating to branch default
722 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
723 $ echo x > tobundle/x
724 $ hg -R tobundle add tobundle/x
725 $ hg -R tobundle commit -m'x'
726 $ hg -R tobundle update -r -2
727 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
728 $ echo y > tobundle/y
729 $ hg -R tobundle branch test
730 marked working directory as branch test
731 (branches are permanent and global, did you want a bookmark?)
732 $ hg -R tobundle add tobundle/y
733 $ hg -R tobundle commit -m'y'
734 $ hg -R tobundle bundle tobundle.hg
735 searching for changes
736 2 changesets found
737 $ hg unbundle tobundle.hg
738 adding changesets
739 adding manifests
740 adding file changes
741 added 2 changesets with 2 changes to 2 files (+1 heads)
742 new changesets 125c9a1d6df6:9ba5f110a0b3 (2 drafts)
743 (run 'hg heads' to see heads, 'hg merge' to merge)
744
745 update to active bookmark if it's not the parent
746
747 (it is a known issue that fsmonitor can't handle nested repositories; in
748 this test scenario, cloned-bookmark-default and tobundle exist in the
749 working directory of the current repository)
750
751 $ hg summary
752 parent: 2:db815d6d32e6
753 2
754 branch: default
755 bookmarks: *Z Y x  y
756 commit: 1 added, 1 unknown (new branch head) (no-fsmonitor !)
757 commit: 1 added, * unknown (new branch head) (glob) (fsmonitor !)
758 update: 2 new changesets (update)
759 phases: 5 draft
760 $ hg update
761 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
762 updating bookmark Z
763 $ hg bookmarks
764 X2 1:925d80f479bb
765 Y 2:db815d6d32e6
766 * Z 3:125c9a1d6df6
767 x  y 2:db815d6d32e6
768
769 pull --update works the same as pull && update
770
771 $ hg bookmark -r3 Y
772 moving bookmark 'Y' forward from db815d6d32e6
773 $ cp -R ../cloned-bookmarks-update ../cloned-bookmarks-manual-update
774 $ cp -R ../cloned-bookmarks-update ../cloned-bookmarks-manual-update-with-divergence
775
776 (manual version)
777
778 $ hg -R ../cloned-bookmarks-manual-update update Y
779 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
780 (activating bookmark Y)
781 $ hg -R ../cloned-bookmarks-manual-update pull .
782 pulling from .
783 searching for changes
784 adding changesets
785 adding manifests
786 adding file changes
787 updating bookmark Y
788 updating bookmark Z
789 added 2 changesets with 2 changes to 2 files (+1 heads)
790 new changesets 125c9a1d6df6:9ba5f110a0b3
791 (run 'hg heads' to see heads, 'hg merge' to merge)
792
793 (# this test is strange, but update with --date used to crash when a bookmark has to move)
794
795 $ hg -R ../cloned-bookmarks-manual-update update -d 1986
796 abort: revision matching date not found
797 [10]
798 $ hg -R ../cloned-bookmarks-manual-update update
799 updating to active bookmark Y
800 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
801
802 (all in one version)
803
804 $ hg -R ../cloned-bookmarks-update update Y
805 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
806 (activating bookmark Y)
807 $ hg -R ../cloned-bookmarks-update pull --update .
808 pulling from .
809 searching for changes
810 adding changesets
811 adding manifests
812 adding file changes
813 updating bookmark Y
814 updating bookmark Z
815 added 2 changesets with 2 changes to 2 files (+1 heads)
816 new changesets 125c9a1d6df6:9ba5f110a0b3
817 updating to active bookmark Y
818 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
819
820 We warn about divergent bookmarks during bare update to the active bookmark
821
822 $ hg -R ../cloned-bookmarks-manual-update-with-divergence update Y
823 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
824 (activating bookmark Y)
825 $ hg -R ../cloned-bookmarks-manual-update-with-divergence bookmarks -r X2 Y@1
826 $ hg -R ../cloned-bookmarks-manual-update-with-divergence bookmarks
827 X2 1:925d80f479bb
828 * Y 2:db815d6d32e6
829 Y@1 1:925d80f479bb
830 Z 2:db815d6d32e6
831 x  y 2:db815d6d32e6
832 $ hg -R ../cloned-bookmarks-manual-update-with-divergence pull
833 pulling from $TESTTMP/repo
834 searching for changes
835 adding changesets
836 adding manifests
837 adding file changes
838 updating bookmark Y
839 updating bookmark Z
840 added 2 changesets with 2 changes to 2 files (+1 heads)
841 new changesets 125c9a1d6df6:9ba5f110a0b3
842 (run 'hg heads' to see heads, 'hg merge' to merge)
843 $ hg -R ../cloned-bookmarks-manual-update-with-divergence update
844 updating to active bookmark Y
845 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
846 1 other divergent bookmarks for "Y"
847
848 test wrongly formatted bookmark
849
850 $ echo '' >> .hg/bookmarks
851 $ hg bookmarks
852 X2 1:925d80f479bb
853 Y 3:125c9a1d6df6
854 * Z 3:125c9a1d6df6
855 x  y 2:db815d6d32e6
856 $ echo "Ican'thasformatedlines" >> .hg/bookmarks
857 $ hg bookmarks
858 malformed line in .hg/bookmarks: "Ican'thasformatedlines"
859 X2 1:925d80f479bb
860 Y 3:125c9a1d6df6
861 * Z 3:125c9a1d6df6
862 x  y 2:db815d6d32e6
863
864 test missing revisions
865
866 $ echo "925d80f479b925d80f479bc925d80f479bccabab z" > .hg/bookmarks
867 $ hg book
868 no bookmarks set
869
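(Editorial aside.) The two scenarios above exercise the on-disk .hg/bookmarks format: one "<40-hex-node> <name>" pair per line, with malformed lines warned about and bookmarks pointing at unknown nodes silently dropped. A rough sketch of a reader with those semantics, where `known_nodes` stands in for the repository's node lookup:

    def read_bookmarks(path, known_nodes):
        marks = {}
        with open(path, 'rb') as fp:
            for line in fp:
                line = line.rstrip(b'\n')
                if not line:
                    continue  # blank lines are tolerated
                try:
                    node, name = line.split(b' ', 1)
                except ValueError:
                    print('malformed line in .hg/bookmarks: %r' % line)
                    continue
                if node in known_nodes:
                    marks[name] = node  # unknown nodes are dropped
        return marks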
870 test stripping a non-checked-out but bookmarked revision
870 test stripping a non-checked-out but bookmarked revision
871
871
872 $ hg log --graph
872 $ hg log --graph
873 o changeset: 4:9ba5f110a0b3
873 o changeset: 4:9ba5f110a0b3
874 | branch: test
874 | branch: test
875 | tag: tip
875 | tag: tip
876 | parent: 2:db815d6d32e6
876 | parent: 2:db815d6d32e6
877 | user: test
877 | user: test
878 | date: Thu Jan 01 00:00:00 1970 +0000
878 | date: Thu Jan 01 00:00:00 1970 +0000
879 | summary: y
879 | summary: y
880 |
880 |
881 | @ changeset: 3:125c9a1d6df6
881 | @ changeset: 3:125c9a1d6df6
882 |/ user: test
882 |/ user: test
883 | date: Thu Jan 01 00:00:00 1970 +0000
883 | date: Thu Jan 01 00:00:00 1970 +0000
884 | summary: x
884 | summary: x
885 |
885 |
886 o changeset: 2:db815d6d32e6
886 o changeset: 2:db815d6d32e6
887 | parent: 0:f7b1eb17ad24
887 | parent: 0:f7b1eb17ad24
888 | user: test
888 | user: test
889 | date: Thu Jan 01 00:00:00 1970 +0000
889 | date: Thu Jan 01 00:00:00 1970 +0000
890 | summary: 2
890 | summary: 2
891 |
891 |
892 | o changeset: 1:925d80f479bb
892 | o changeset: 1:925d80f479bb
893 |/ user: test
893 |/ user: test
894 | date: Thu Jan 01 00:00:00 1970 +0000
894 | date: Thu Jan 01 00:00:00 1970 +0000
895 | summary: 1
895 | summary: 1
896 |
896 |
897 o changeset: 0:f7b1eb17ad24
897 o changeset: 0:f7b1eb17ad24
898 user: test
898 user: test
899 date: Thu Jan 01 00:00:00 1970 +0000
899 date: Thu Jan 01 00:00:00 1970 +0000
900 summary: 0
900 summary: 0
901
901
  $ hg book should-end-on-two
  $ hg co --clean 4
  1 files updated, 0 files merged, 1 files removed, 0 files unresolved
  (leaving bookmark should-end-on-two)
  $ hg book four
  $ hg --config extensions.mq= strip 3
  saved backup bundle to * (glob)
should-end-on-two should end up pointing to revision 2, as that's the
tipmost surviving ancestor of the stripped revision.
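
As an illustrative aside (not part of the test run), a toy sketch of
that relocation rule, with hypothetical helper names rather than
Mercurial's strip implementation:

  def relocate(mark_rev, stripped, parents):
      # Walk ancestors of the bookmarked revision, skipping stripped
      # ones; "tipmost" means the highest surviving revision number.
      seen, stack, best = set(), [mark_rev], None
      while stack:
          rev = stack.pop()
          if rev in seen:
              continue
          seen.add(rev)
          if rev in stripped:
              stack.extend(parents[rev])
          elif best is None or rev > best:
              best = rev
      return best

  # In the test above: bookmark on 3, strip {3}, parent of 3 is 2 -> 2.
  assert relocate(3, {3}, {3: [2]}) == 2

The graph below confirms the bookmark's new location.
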
  $ hg log --graph
  @ changeset: 3:9ba5f110a0b3
  | branch: test
  | bookmark: four
  | tag: tip
  | user: test
  | date: Thu Jan 01 00:00:00 1970 +0000
  | summary: y
  |
  o changeset: 2:db815d6d32e6
  | bookmark: should-end-on-two
  | parent: 0:f7b1eb17ad24
  | user: test
  | date: Thu Jan 01 00:00:00 1970 +0000
  | summary: 2
  |
  | o changeset: 1:925d80f479bb
  |/ user: test
  | date: Thu Jan 01 00:00:00 1970 +0000
  | summary: 1
  |
  o changeset: 0:f7b1eb17ad24
  user: test
  date: Thu Jan 01 00:00:00 1970 +0000
  summary: 0


no-op update doesn't deactivate bookmarks

(it is a known issue that fsmonitor can't handle nested repositories; in
this test scenario, cloned-bookmark-default and tobundle exist in the
working directory of the current repository)

  $ hg bookmarks
  * four 3:9ba5f110a0b3
  should-end-on-two 2:db815d6d32e6
  $ hg up four
  0 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ hg up
  0 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ hg sum
  parent: 3:9ba5f110a0b3 tip
  y
  branch: test
  bookmarks: *four
  commit: 2 unknown (clean) (no-fsmonitor !)
  commit: * unknown (clean) (glob) (fsmonitor !)
  update: (current)
  phases: 4 draft

test clearing divergent bookmarks of linear ancestors

  $ hg bookmark Z -r 0
  $ hg bookmark Z@1 -r 1
  $ hg bookmark Z@2 -r 2
  $ hg bookmark Z@3 -r 3
  $ hg book
  Z 0:f7b1eb17ad24
  Z@1 1:925d80f479bb
  Z@2 2:db815d6d32e6
  Z@3 3:9ba5f110a0b3
  * four 3:9ba5f110a0b3
  should-end-on-two 2:db815d6d32e6
  $ hg bookmark Z
  moving bookmark 'Z' forward from f7b1eb17ad24
  $ hg book
  * Z 3:9ba5f110a0b3
  Z@1 1:925d80f479bb
  four 3:9ba5f110a0b3
  should-end-on-two 2:db815d6d32e6

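A hedged sketch of the clearing behaviour the listing above shows
(hypothetical helper names; not Mercurial's code): when the active
bookmark Z is moved forward to node N, a divergent mark "Z@suffix" is
dropped if it points to N or one of N's ancestors, which is why Z@2 and
Z@3 vanish while Z@1, on the unrelated revision 1, survives. (The next
test shows the narrower behaviour of an explicit -r move, where only
the divergent mark pointing exactly at the destination is cleared.)

  def clear_divergent(marks, name, new_node, is_ancestor):
      # is_ancestor(a, b) is True when a == b or a precedes b.
      for other in list(marks):
          if other != name and other.split('@', 1)[0] == name:
              if is_ancestor(marks[other], new_node):
                  del marks[other]
      marks[name] = new_node
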
test clearing only a single divergent bookmark across branches

  $ hg book foo -r 1
  $ hg book foo@1 -r 0
  $ hg book foo@2 -r 2
  $ hg book foo@3 -r 3
  $ hg book foo -r foo@3
  $ hg book
  * Z 3:9ba5f110a0b3
  Z@1 1:925d80f479bb
  foo 3:9ba5f110a0b3
  foo@1 0:f7b1eb17ad24
  foo@2 2:db815d6d32e6
  four 3:9ba5f110a0b3
  should-end-on-two 2:db815d6d32e6

pull --update works the same as pull && update (case #2)

It is assumed that "hg pull" itself doesn't update the current active
bookmark ('Y' in the tests below).

  $ hg pull -q ../cloned-bookmarks-update
  divergent bookmark Z stored as Z@2

(pulling a revision on another named branch with --update updates
neither the working directory nor the current active bookmark: the
"no-op" case)

  $ echo yy >> y
  $ hg commit -m yy

  $ hg -R ../cloned-bookmarks-update bookmarks | grep ' Y '
  * Y 3:125c9a1d6df6
  $ hg -R ../cloned-bookmarks-update path
  default = $TESTTMP/repo
  $ pwd
  $TESTTMP/repo
  $ hg -R ../cloned-bookmarks-update pull . --update
  pulling from .
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  divergent bookmark Z stored as Z@default
  adding remote bookmark foo
  adding remote bookmark four
  adding remote bookmark should-end-on-two
  added 1 changesets with 1 changes to 1 files
  new changesets 5fb12f0f2d51
  0 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ hg -R ../cloned-bookmarks-update parents -T "{rev}:{node|short}\n"
  3:125c9a1d6df6
  $ hg -R ../cloned-bookmarks-update bookmarks | grep ' Y '
  * Y 3:125c9a1d6df6

(pulling a revision on the current named/topological branch with
--update updates the working directory and the current active bookmark)

  $ hg update -C -q 125c9a1d6df6
  $ echo xx >> x
  $ hg commit -m xx

  $ hg -R ../cloned-bookmarks-update bookmarks | grep ' Y '
  * Y 3:125c9a1d6df6
  $ hg -R ../cloned-bookmarks-update pull . --update
  pulling from .
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  divergent bookmark Z stored as Z@default
  added 1 changesets with 1 changes to 1 files
  new changesets 81dcce76aa0b
  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
  updating bookmark Y
  $ hg -R ../cloned-bookmarks-update parents -T "{rev}:{node|short}\n"
  6:81dcce76aa0b
  $ hg -R ../cloned-bookmarks-update bookmarks | grep ' Y '
  * Y 6:81dcce76aa0b

  $ cd ..

ensure the changelog is written before bookmarks
  $ hg init orderrepo
  $ cd orderrepo
  $ touch a
  $ hg commit -Aqm one
  $ hg book mybook
  $ echo a > a

  $ cat > $TESTTMP/pausefinalize.py <<EOF
  > import os
  > import time
  > from mercurial import extensions, localrepo
  > def transaction(orig, self, desc, report=None):
  >     tr = orig(self, desc, report)
  >     def sleep(*args, **kwargs):
  >         retry = 20
  >         while retry > 0 and not os.path.exists(b"$TESTTMP/unpause"):
  >             retry -= 1
  >             time.sleep(0.5)
  >         if os.path.exists(b"$TESTTMP/unpause"):
  >             os.remove(b"$TESTTMP/unpause")
  >     # It is important that this finalizer start with '000-a', so it runs
  >     # before the changelog finalizer appends to the changelog.
  >     tr.addfinalize(b'000-a-sleep', sleep)
  >     return tr
  >
  > def extsetup(ui):
  >     # This extension inserts an artificial pause during the transaction
  >     # finalizer, so we can run commands mid-transaction-close.
  >     extensions.wrapfunction(localrepo.localrepository, 'transaction',
  >                             transaction)
  > EOF
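
The comment in the extension above relies on finalizers running in
lexicographic order of their category names; a minimal sketch of that
ordering assumption (illustrative only, not Mercurial's transaction
class):

  class Tx(object):
      def __init__(self):
          self._finalizers = {}

      def addfinalize(self, category, callback):
          self._finalizers[category] = callback

      def close(self):
          # Run callbacks sorted by category name: '000-a-sleep' sorts
          # before the changelog's own finalizer category.
          for category in sorted(self._finalizers):
              self._finalizers[category](self)
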
  $ hg commit -qm two --config extensions.pausefinalize=$TESTTMP/pausefinalize.py &
  $ sleep 2
  $ hg log -r .
  changeset: 0:867bc5792c8c
  bookmark: mybook
  tag: tip
  user: test
  date: Thu Jan 01 00:00:00 1970 +0000
  summary: one

  $ hg bookmarks
  * mybook 0:867bc5792c8c
  $ touch $TESTTMP/unpause

  $ cd ..

check whether HG_PENDING makes pending changes visible to an external
hook only in the repository the transaction belongs to

(emulate a concurrently running transaction by copying
.hg/bookmarks.pending for reuse in the subsequent test)

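A hedged sketch of the visibility rule being checked (hypothetical
helper names; not Mercurial's localrepo code): a process spawned by a
hook reads the pending variants of files such as .hg/bookmarks only
when HG_PENDING names the root of the repository it is inspecting.

  import os

  def bookmarks_path(repo_root):
      # Pending files are consulted only for the repository whose
      # transaction is open, as identified by $HG_PENDING.
      pending = os.environ.get('HG_PENDING') == repo_root
      name = 'bookmarks.pending' if pending else 'bookmarks'
      return os.path.join(repo_root, '.hg', name)
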
  $ cat > $TESTTMP/savepending.sh <<EOF
  > cp .hg/bookmarks.pending .hg/bookmarks.pending.saved
  > exit 1 # to avoid adding new bookmark for subsequent tests
  > EOF

  $ hg init unrelated
  $ cd unrelated
  $ echo a > a
  $ hg add a
  $ hg commit -m '#0'
  $ hg --config hooks.pretxnclose="sh $TESTTMP/savepending.sh" bookmarks INVISIBLE
  abort: pretxnclose hook exited with status 1
  [40]
  $ cp .hg/bookmarks.pending.saved .hg/bookmarks.pending

(check which bookmarks are visible while a transaction is running in repo)

  $ cat > $TESTTMP/checkpending.sh <<EOF
  > echo "@repo"
  > hg -R "$TESTTMP/repo" bookmarks
  > echo "@unrelated"
  > hg -R "$TESTTMP/unrelated" bookmarks
  > exit 1 # to avoid adding new bookmark for subsequent tests
  > EOF

  $ cd ../repo
  $ hg --config hooks.pretxnclose="sh $TESTTMP/checkpending.sh" bookmarks NEW
  @repo
  * NEW 6:81dcce76aa0b
  X2 1:925d80f479bb
  Y 4:125c9a1d6df6
  Z 5:5fb12f0f2d51
  Z@1 1:925d80f479bb
  Z@2 4:125c9a1d6df6
  foo 3:9ba5f110a0b3
  foo@1 0:f7b1eb17ad24
  foo@2 2:db815d6d32e6
  four 3:9ba5f110a0b3
  should-end-on-two 2:db815d6d32e6
  x y 2:db815d6d32e6
  @unrelated
  no bookmarks set
  abort: pretxnclose hook exited with status 1
  [40]

Check pretxnclose-bookmark can abort a transaction
--------------------------------------------------

add hooks:

* to prevent NEW bookmark on a non-public changeset
* to prevent non-forward move of NEW bookmark

  $ cat << EOF >> .hg/hgrc
  > [hooks]
  > pretxnclose-bookmark.force-public = sh -c "(echo \$HG_BOOKMARK| grep -v NEW > /dev/null) || [ -z \"\$HG_NODE\" ] || (hg log -r \"\$HG_NODE\" -T '{phase}' | grep public > /dev/null)"
  > pretxnclose-bookmark.force-forward = sh -c "(echo \$HG_BOOKMARK| grep -v NEW > /dev/null) || [ -z \"\$HG_NODE\" ] || (hg log -r \"max(\$HG_OLDNODE::\$HG_NODE)\" -T 'MATCH' | grep MATCH > /dev/null)"
  > EOF
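
A hedged restatement of the two one-liners above (hypothetical helper
names; not the hook API itself): each hook passes for bookmarks other
than NEW and for deletions (empty HG_NODE); otherwise the first demands
a public target and the second demands a forward move, which the revset
max(OLDNODE::NODE) detects by being non-empty only when the old node is
an ancestor of the new one.

  def check_new_bookmark(node, oldnode, phase_of, is_ancestor):
      if node is None:
          return True  # deletion: both hooks pass
      if phase_of(node) != 'public':
          return False  # force-public aborts
      if oldnode is not None and not is_ancestor(oldnode, node):
          return False  # force-forward aborts
      return True
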

  $ hg log -G -T phases
  @ changeset: 6:81dcce76aa0b
  | tag: tip
  | phase: draft
  | parent: 4:125c9a1d6df6
  | user: test
  | date: Thu Jan 01 00:00:00 1970 +0000
  | summary: xx
  |
  | o changeset: 5:5fb12f0f2d51
  | | branch: test
  | | bookmark: Z
  | | phase: draft
  | | parent: 3:9ba5f110a0b3
  | | user: test
  | | date: Thu Jan 01 00:00:00 1970 +0000
  | | summary: yy
  | |
  o | changeset: 4:125c9a1d6df6
  | | bookmark: Y
  | | bookmark: Z@2
  | | phase: public
  | | parent: 2:db815d6d32e6
  | | user: test
  | | date: Thu Jan 01 00:00:00 1970 +0000
  | | summary: x
  | |
  | o changeset: 3:9ba5f110a0b3
  |/ branch: test
  | bookmark: foo
  | bookmark: four
  | phase: public
  | user: test
  | date: Thu Jan 01 00:00:00 1970 +0000
  | summary: y
  |
  o changeset: 2:db815d6d32e6
  | bookmark: foo@2
  | bookmark: should-end-on-two
  | bookmark: x y
  | phase: public
  | parent: 0:f7b1eb17ad24
  | user: test
  | date: Thu Jan 01 00:00:00 1970 +0000
  | summary: 2
  |
  | o changeset: 1:925d80f479bb
  |/ bookmark: X2
  | bookmark: Z@1
  | phase: public
  | user: test
  | date: Thu Jan 01 00:00:00 1970 +0000
  | summary: 1
  |
  o changeset: 0:f7b1eb17ad24
  bookmark: foo@1
  phase: public
  user: test
  date: Thu Jan 01 00:00:00 1970 +0000
  summary: 0


attempt to create on a draft changeset

  $ hg bookmark -r 81dcce76aa0b NEW
  abort: pretxnclose-bookmark.force-public hook exited with status 1
  [40]

create on a public changeset

  $ hg bookmark -r 9ba5f110a0b3 NEW

move to the other branch

  $ hg bookmark -f -r 125c9a1d6df6 NEW
  abort: pretxnclose-bookmark.force-forward hook exited with status 1
  [40]