obsmarker: crash more helpfully when metadata fields are >255 bytes (issue5681)...
Simon Whitaker
r34408:b6692ba7 default
@@ -0,0 +1,23 b''
Create a repo, set the username to something more than 255 bytes, then run hg amend on it.

  $ unset HGUSER
  $ cat >> $HGRCPATH << EOF
  > [ui]
  > username = aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa <very.long.name@example.com>
  > [extensions]
  > amend =
  > [experimental]
  > stabilization=createmarkers,exchange
  > EOF
  $ hg init tmpa
  $ cd tmpa
  $ echo a > a
  $ hg add
  adding a
  $ hg commit -m "Initial commit"
  $ echo a >> a
  $ hg amend 2>&1 | egrep -v '^(\*\*| )'
  transaction abort!
  rollback completed
  Traceback (most recent call last):
  mercurial.error.ProgrammingError: obsstore metadata value cannot be longer than 255 bytes (value "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa <very.long.name@example.com>" for key "user" is 285 bytes)
@@ -1,1075 +1,1083 b''
# obsolete.py - obsolete markers handling
#
# Copyright 2012 Pierre-Yves David <pierre-yves.david@ens-lyon.org>
#                Logilab SA        <contact@logilab.fr>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""Obsolete marker handling

An obsolete marker maps an old changeset to a list of new
changesets. If the list of new changesets is empty, the old changeset
is said to be "killed". Otherwise, the old changeset is being
"replaced" by the new changesets.

Obsolete markers can be used to record and distribute changeset graph
transformations performed by history rewrite operations, and help
building new tools to reconcile conflicting rewrite actions. To
facilitate conflict resolution, markers include various annotations
besides old and new changeset identifiers, such as creation date or
author name.

The old obsoleted changeset is called a "predecessor" and possible
replacements are called "successors". Markers that used changeset X as
a predecessor are called "successor markers of X" because they hold
information about the successors of X. Markers that use changeset Y as
a successor are called "predecessor markers of Y" because they hold
information about the predecessors of Y.

Examples:

- When changeset A is replaced by changeset A', one marker is stored:

    (A, (A',))

- When changesets A and B are folded into a new changeset C, two markers are
  stored:

    (A, (C,)) and (B, (C,))

- When changeset A is simply "pruned" from the graph, a marker is created:

    (A, ())

- When changeset A is split into B and C, a single marker is used:

    (A, (B, C))

  We use a single marker to distinguish the "split" case from the "divergence"
  case. If two independent operations rewrite the same changeset A into A' and
  A'', we have an error case: divergent rewriting. We can detect it because
  two markers will be created independently:

  (A, (B,)) and (A, (C,))

Format
------

Markers are stored in an append-only file stored in
'.hg/store/obsstore'.

The file starts with a version header:

- 1 unsigned byte: version number, starting at zero.

The header is followed by the markers. The marker format depends on the
version. See the comment associated with each format for details.

"""
from __future__ import absolute_import

import errno
import struct

from .i18n import _
from . import (
    error,
    node,
    obsutil,
    phases,
    policy,
    util,
)

parsers = policy.importmod(r'parsers')

_pack = struct.pack
_unpack = struct.unpack
_calcsize = struct.calcsize
propertycache = util.propertycache

# the obsolete feature is not mature enough to be enabled by default.
# you have to rely on a third party extension to enable this.
_enabled = False

# Options for obsolescence
createmarkersopt = 'createmarkers'
allowunstableopt = 'allowunstable'
exchangeopt = 'exchange'

def isenabled(repo, option):
    """Returns True if the given repository has the given obsolete option
    enabled.
    """
    result = set(repo.ui.configlist('experimental', 'stabilization'))
    if 'all' in result:
        return True

    # For migration purposes, temporarily return true if the config hasn't
    # been set but _enabled is true.
    if len(result) == 0 and _enabled:
        return True

    # createmarkers must be enabled if other options are enabled
    if ((allowunstableopt in result or exchangeopt in result) and
        not createmarkersopt in result):
        raise error.Abort(_("'createmarkers' obsolete option must be enabled "
                            "if other obsolete options are enabled"))

    return option in result

### obsolescence marker flag

## bumpedfix flag
#
# When a changeset A' succeeds a changeset A which became public, we call A'
# "bumped" because it is a successor of a public changeset.
#
#     o A' (bumped)
#     |`:
#     | o A
#     |/
#     o Z
#
# The way to solve this situation is to create a new changeset Ad as a child
# of A. This changeset has the same content as A'. So the diff from A to A'
# is the same as the diff from A to Ad. Ad is marked as a successor of A'.
#
#     o Ad
#     |`:
#     | x A'
#     |'|
#     o | A
#     |/
#     o Z
#
# But by transitivity Ad is also a successor of A. To avoid having Ad marked
# as bumped too, we add the `bumpedfix` flag to the marker, <A', (Ad,)>.
# This flag means that the successors express the changes between the public
# and bumped version and fix the situation, breaking the transitivity of
# "bumped" here.
bumpedfix = 1
usingsha256 = 2

## Parsing and writing of version "0"
#
# The header is followed by the markers. Each marker is made of:
#
# - 1 uint8 : number of new changesets "N", can be zero.
#
# - 1 uint32: metadata size "M" in bytes.
#
# - 1 byte: a bit field. It is reserved for flags used in common
#   obsolete marker operations, to avoid repeated decoding of metadata
#   entries.
#
# - 20 bytes: obsoleted changeset identifier.
#
# - N*20 bytes: new changesets identifiers.
#
# - M bytes: metadata as a sequence of nul-terminated strings. Each
#   string contains a key and a value, separated by a colon ':', without
#   additional encoding. Keys cannot contain '\0' or ':' and values
#   cannot contain '\0'.
_fm0version = 0
_fm0fixed = '>BIB20s'
_fm0node = '20s'
_fm0fsize = _calcsize(_fm0fixed)
_fm0fnodesize = _calcsize(_fm0node)

def _fm0readmarkers(data, off, stop):
    # Loop on markers
    while off < stop:
        # read fixed part
        cur = data[off:off + _fm0fsize]
        off += _fm0fsize
        numsuc, mdsize, flags, pre = _unpack(_fm0fixed, cur)
        # read replacement
        sucs = ()
        if numsuc:
            s = (_fm0fnodesize * numsuc)
            cur = data[off:off + s]
            sucs = _unpack(_fm0node * numsuc, cur)
            off += s
        # read metadata
        # (metadata will be decoded on demand)
        metadata = data[off:off + mdsize]
        if len(metadata) != mdsize:
            raise error.Abort(_('parsing obsolete marker: metadata is too '
                                'short, %d bytes expected, got %d')
                              % (mdsize, len(metadata)))
        off += mdsize
        metadata = _fm0decodemeta(metadata)
        try:
            when, offset = metadata.pop('date', '0 0').split(' ')
            date = float(when), int(offset)
        except ValueError:
            date = (0., 0)
        parents = None
        if 'p2' in metadata:
            parents = (metadata.pop('p1', None), metadata.pop('p2', None))
        elif 'p1' in metadata:
            parents = (metadata.pop('p1', None),)
        elif 'p0' in metadata:
            parents = ()
        if parents is not None:
            try:
                parents = tuple(node.bin(p) for p in parents)
                # if parent content is not a nodeid, drop the data
                for p in parents:
                    if len(p) != 20:
                        parents = None
                        break
            except TypeError:
                # if content cannot be translated to nodeid drop the data.
                parents = None

        metadata = tuple(sorted(metadata.iteritems()))

        yield (pre, sucs, flags, metadata, date, parents)

def _fm0encodeonemarker(marker):
    pre, sucs, flags, metadata, date, parents = marker
    if flags & usingsha256:
        raise error.Abort(_('cannot handle sha256 with old obsstore format'))
    metadata = dict(metadata)
    time, tz = date
    metadata['date'] = '%r %i' % (time, tz)
    if parents is not None:
        if not parents:
            # mark that we explicitly recorded no parents
            metadata['p0'] = ''
        for i, p in enumerate(parents, 1):
            metadata['p%i' % i] = node.hex(p)
    metadata = _fm0encodemeta(metadata)
    numsuc = len(sucs)
    format = _fm0fixed + (_fm0node * numsuc)
    data = [numsuc, len(metadata), flags, pre]
    data.extend(sucs)
    return _pack(format, *data) + metadata

def _fm0encodemeta(meta):
    """Return encoded metadata string to string mapping.

    Assume no ':' in key and no '\0' in both key and value."""
    for key, value in meta.iteritems():
        if ':' in key or '\0' in key:
            raise ValueError("':' and '\0' are forbidden in metadata key")
        if '\0' in value:
            raise ValueError("'\0' is forbidden in metadata value")
    return '\0'.join(['%s:%s' % (k, meta[k]) for k in sorted(meta)])

def _fm0decodemeta(data):
    """Return string to string dictionary from encoded version."""
    d = {}
    for l in data.split('\0'):
        if l:
            key, value = l.split(':')
            d[key] = value
    return d

## Parsing and writing of version "1"
#
# The header is followed by the markers. Each marker is made of:
#
# - uint32: total size of the marker (including this field)
#
# - float64: date in seconds since epoch
#
# - int16: timezone offset in minutes
#
# - uint16: a bit field. It is reserved for flags used in common
#   obsolete marker operations, to avoid repeated decoding of metadata
#   entries.
#
# - uint8: number of successors "N", can be zero.
#
# - uint8: number of parents "P", can be zero.
#
#     0: parents data stored but no parent,
#     1: one parent stored,
#     2: two parents stored,
#     3: no parent data stored
#
# - uint8: number of metadata entries M
#
# - 20 or 32 bytes: predecessor changeset identifier.
#
# - N*(20 or 32) bytes: successors changesets identifiers.
#
# - P*(20 or 32) bytes: parents of the predecessors changesets.
#
# - M*(uint8, uint8): size of all metadata entries (key and value)
#
# - remaining bytes: the metadata, each (key, value) pair after the other.
_fm1version = 1
_fm1fixed = '>IdhHBBB20s'
_fm1nodesha1 = '20s'
_fm1nodesha256 = '32s'
_fm1nodesha1size = _calcsize(_fm1nodesha1)
_fm1nodesha256size = _calcsize(_fm1nodesha256)
_fm1fsize = _calcsize(_fm1fixed)
_fm1parentnone = 3
_fm1parentshift = 14
_fm1parentmask = (_fm1parentnone << _fm1parentshift)
_fm1metapair = 'BB'
_fm1metapairsize = _calcsize(_fm1metapair)

def _fm1purereadmarkers(data, off, stop):
    # make some global constants local for performance
    noneflag = _fm1parentnone
    sha2flag = usingsha256
    sha1size = _fm1nodesha1size
    sha2size = _fm1nodesha256size
    sha1fmt = _fm1nodesha1
    sha2fmt = _fm1nodesha256
    metasize = _fm1metapairsize
    metafmt = _fm1metapair
    fsize = _fm1fsize
    unpack = _unpack

    # Loop on markers
    ufixed = struct.Struct(_fm1fixed).unpack

    while off < stop:
        # read fixed part
        o1 = off + fsize
        t, secs, tz, flags, numsuc, numpar, nummeta, prec = ufixed(data[off:o1])

        if flags & sha2flag:
            # FIXME: prec was read as a SHA1, needs to be amended

            # read 0 or more successors
            if numsuc == 1:
                o2 = o1 + sha2size
                sucs = (data[o1:o2],)
            else:
                o2 = o1 + sha2size * numsuc
                sucs = unpack(sha2fmt * numsuc, data[o1:o2])

            # read parents
            if numpar == noneflag:
                o3 = o2
                parents = None
            elif numpar == 1:
                o3 = o2 + sha2size
                parents = (data[o2:o3],)
            else:
                o3 = o2 + sha2size * numpar
                parents = unpack(sha2fmt * numpar, data[o2:o3])
        else:
            # read 0 or more successors
            if numsuc == 1:
                o2 = o1 + sha1size
                sucs = (data[o1:o2],)
            else:
                o2 = o1 + sha1size * numsuc
                sucs = unpack(sha1fmt * numsuc, data[o1:o2])

            # read parents
            if numpar == noneflag:
                o3 = o2
                parents = None
            elif numpar == 1:
                o3 = o2 + sha1size
                parents = (data[o2:o3],)
            else:
                o3 = o2 + sha1size * numpar
                parents = unpack(sha1fmt * numpar, data[o2:o3])

        # read metadata
        off = o3 + metasize * nummeta
        metapairsize = unpack('>' + (metafmt * nummeta), data[o3:off])
        metadata = []
        for idx in xrange(0, len(metapairsize), 2):
            o1 = off + metapairsize[idx]
            o2 = o1 + metapairsize[idx + 1]
            metadata.append((data[off:o1], data[o1:o2]))
            off = o2

        yield (prec, sucs, flags, tuple(metadata), (secs, tz * 60), parents)

def _fm1encodeonemarker(marker):
    pre, sucs, flags, metadata, date, parents = marker
    # determine node size
    _fm1node = _fm1nodesha1
    if flags & usingsha256:
        _fm1node = _fm1nodesha256
    numsuc = len(sucs)
    numextranodes = numsuc
    if parents is None:
        numpar = _fm1parentnone
    else:
        numpar = len(parents)
        numextranodes += numpar
    formatnodes = _fm1node * numextranodes
    formatmeta = _fm1metapair * len(metadata)
    format = _fm1fixed + formatnodes + formatmeta
    # tz is stored in minutes so we divide by 60
    tz = date[1]//60
    data = [None, date[0], tz, flags, numsuc, numpar, len(metadata), pre]
    data.extend(sucs)
    if parents is not None:
        data.extend(parents)
    totalsize = _calcsize(format)
    for key, value in metadata:
        lk = len(key)
        lv = len(value)
        if lk > 255:
            msg = ('obsstore metadata key cannot be longer than 255 bytes'
                   ' (key "%s" is %u bytes)') % (key, lk)
            raise error.ProgrammingError(msg)
        if lv > 255:
            msg = ('obsstore metadata value cannot be longer than 255 bytes'
                   ' (value "%s" for key "%s" is %u bytes)') % (value, key, lv)
            raise error.ProgrammingError(msg)
        data.append(lk)
        data.append(lv)
        totalsize += lk + lv
    data[0] = totalsize
    data = [_pack(format, *data)]
    for key, value in metadata:
        data.append(key)
        data.append(value)
    return ''.join(data)

def _fm1readmarkers(data, off, stop):
    native = getattr(parsers, 'fm1readmarkers', None)
    if not native:
        return _fm1purereadmarkers(data, off, stop)
    return native(data, off, stop)

# mapping to read/write various marker formats
# <version> -> (decoder, encoder)
formats = {_fm0version: (_fm0readmarkers, _fm0encodeonemarker),
           _fm1version: (_fm1readmarkers, _fm1encodeonemarker)}

def _readmarkerversion(data):
    return _unpack('>B', data[0:1])[0]

@util.nogc
def _readmarkers(data, off=None, stop=None):
    """Read and enumerate markers from raw data"""
    diskversion = _readmarkerversion(data)
    if not off:
        off = 1  # skip 1 byte version number
    if stop is None:
        stop = len(data)
    if diskversion not in formats:
        msg = _('parsing obsolete marker: unknown version %r') % diskversion
        raise error.UnknownVersion(msg, version=diskversion)
    return diskversion, formats[diskversion][0](data, off, stop)

def encodeheader(version=_fm0version):
    return _pack('>B', version)

def encodemarkers(markers, addheader=False, version=_fm0version):
    # Kept separate from flushmarkers(), it will be reused for
    # markers exchange.
    encodeone = formats[version][1]
    if addheader:
        yield encodeheader(version)
    for marker in markers:
        yield encodeone(marker)

@util.nogc
def _addsuccessors(successors, markers):
    for mark in markers:
        successors.setdefault(mark[0], set()).add(mark)

def _addprecursors(*args, **kwargs):
    msg = ("'obsolete._addprecursors' is deprecated, "
           "use 'obsolete._addpredecessors'")
    util.nouideprecwarn(msg, '4.4')

    return _addpredecessors(*args, **kwargs)

@util.nogc
def _addpredecessors(predecessors, markers):
    for mark in markers:
        for suc in mark[1]:
            predecessors.setdefault(suc, set()).add(mark)
485
493
486 @util.nogc
494 @util.nogc
487 def _addchildren(children, markers):
495 def _addchildren(children, markers):
488 for mark in markers:
496 for mark in markers:
489 parents = mark[5]
497 parents = mark[5]
490 if parents is not None:
498 if parents is not None:
491 for p in parents:
499 for p in parents:
492 children.setdefault(p, set()).add(mark)
500 children.setdefault(p, set()).add(mark)
493
501
494 def _checkinvalidmarkers(markers):
502 def _checkinvalidmarkers(markers):
495 """search for marker with invalid data and raise error if needed
503 """search for marker with invalid data and raise error if needed
496
504
497 Exist as a separated function to allow the evolve extension for a more
505 Exist as a separated function to allow the evolve extension for a more
498 subtle handling.
506 subtle handling.
499 """
507 """
500 for mark in markers:
508 for mark in markers:
501 if node.nullid in mark[1]:
509 if node.nullid in mark[1]:
502 raise error.Abort(_('bad obsolescence marker detected: '
510 raise error.Abort(_('bad obsolescence marker detected: '
503 'invalid successors nullid'))
511 'invalid successors nullid'))
504
512
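A small self-contained sketch of how these helpers index markers, using short strings in place of real 20-byte node ids and only the tuple fields the helpers actually touch:

```python
# marker layout: (prec, succs, flag, meta, date, parents)
m1 = ('a', ('b',), 0, (), (0.0, 0), ('p',))
m2 = ('b', (), 0, (), (0.0, 0), ('a',))  # prune marker: no successors

successors, predecessors, children = {}, {}, {}
for mark in (m1, m2):
    # same setdefault pattern as _addsuccessors/_addpredecessors/_addchildren
    successors.setdefault(mark[0], set()).add(mark)
    for suc in mark[1]:
        predecessors.setdefault(suc, set()).add(mark)
    if mark[5] is not None:
        for p in mark[5]:
            children.setdefault(p, set()).add(mark)

print(sorted(successors))    # ['a', 'b']
print(sorted(predecessors))  # ['b']
print(sorted(children))      # ['a', 'p']
```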
class obsstore(object):
    """Store obsolete markers

    Markers can be accessed with three mappings:
    - predecessors[x] -> set(markers on predecessors edges of x)
    - successors[x] -> set(markers on successors edges of x)
    - children[x] -> set(markers on predecessors edges of children(x))
    """

    fields = ('prec', 'succs', 'flag', 'meta', 'date', 'parents')
    # prec: nodeid, predecessor changesets
    # succs: tuple of nodeid, successor changesets (0-N length)
    # flag: integer, flag field carrying modifier for the markers (see doc)
    # meta: binary blob, encoded metadata dictionary
    # date: (float, int) tuple, date of marker creation
    # parents: (tuple of nodeid) or None, parents of predecessors
    # None is used when no data has been recorded

    def __init__(self, svfs, defaultformat=_fm1version, readonly=False):
        # caches for various obsolescence related data
        self.caches = {}
        self.svfs = svfs
        self._defaultformat = defaultformat
        self._readonly = readonly

    def __iter__(self):
        return iter(self._all)

    def __len__(self):
        return len(self._all)

    def __nonzero__(self):
        if not self._cached('_all'):
            try:
                return self.svfs.stat('obsstore').st_size > 1
            except OSError as inst:
                if inst.errno != errno.ENOENT:
                    raise
                # just build an empty _all list if no obsstore exists, which
                # avoids further stat() syscalls
        return bool(self._all)

    __bool__ = __nonzero__

    @property
    def readonly(self):
        """True if marker creation is disabled

        Remove me in the future when obsolete markers are always on."""
        return self._readonly

    def create(self, transaction, prec, succs=(), flag=0, parents=None,
               date=None, metadata=None, ui=None):
        """obsolete: add a new obsolete marker

        * ensuring it is hashable
        * check mandatory metadata
        * encode metadata

        If you are a human writing code creating markers you want to use the
        `createmarkers` function in this module instead.

        return True if a new marker has been added, False if the marker
        already existed (no op).
        """
        if metadata is None:
            metadata = {}
        if date is None:
            if 'date' in metadata:
                # as a courtesy for out-of-tree extensions
                date = util.parsedate(metadata.pop('date'))
            elif ui is not None:
                date = ui.configdate('devel', 'default-date')
                if date is None:
                    date = util.makedate()
            else:
                date = util.makedate()
        if len(prec) != 20:
            raise ValueError(prec)
        for succ in succs:
            if len(succ) != 20:
                raise ValueError(succ)
        if prec in succs:
            raise ValueError(_('in-marker cycle with %s') % node.hex(prec))

        metadata = tuple(sorted(metadata.iteritems()))

        marker = (bytes(prec), tuple(succs), int(flag), metadata, date, parents)
        return bool(self.add(transaction, [marker]))

    def add(self, transaction, markers):
        """Add new markers to the store

        Take care of filtering duplicates.
        Return the number of new markers."""
        if self._readonly:
            raise error.Abort(_('creating obsolete markers is not enabled on '
                                'this repo'))
        known = set()
        getsuccessors = self.successors.get
        new = []
        for m in markers:
            if m not in getsuccessors(m[0], ()) and m not in known:
                known.add(m)
                new.append(m)
        if new:
            f = self.svfs('obsstore', 'ab')
            try:
                offset = f.tell()
                transaction.add('obsstore', offset)
                # offset == 0: new file - add the version header
                data = b''.join(encodemarkers(new, offset == 0, self._version))
                f.write(data)
            finally:
                # XXX: f.close() == filecache invalidation == obsstore rebuilt.
                # call 'filecacheentry.refresh()' here
                f.close()
            addedmarkers = transaction.changes.get('obsmarkers')
            if addedmarkers is not None:
                addedmarkers.update(new)
            self._addmarkers(new, data)
            # new markers *may* have changed several sets. invalidate the cache.
            self.caches.clear()
        # records the number of new markers for the transaction hooks
        previous = int(transaction.hookargs.get('new_obsmarkers', '0'))
        transaction.hookargs['new_obsmarkers'] = str(previous + len(new))
        return len(new)

    def mergemarkers(self, transaction, data):
        """merge a binary stream of markers inside the obsstore

        Returns the number of new markers added."""
        version, markers = _readmarkers(data)
        return self.add(transaction, markers)

    @propertycache
    def _data(self):
        return self.svfs.tryread('obsstore')

    @propertycache
    def _version(self):
        if len(self._data) >= 1:
            return _readmarkerversion(self._data)
        else:
            return self._defaultformat

    @propertycache
    def _all(self):
        data = self._data
        if not data:
            return []
        self._version, markers = _readmarkers(data)
        markers = list(markers)
        _checkinvalidmarkers(markers)
        return markers

    @propertycache
    def successors(self):
        successors = {}
        _addsuccessors(successors, self._all)
        return successors

    @property
    def precursors(self):
        msg = ("'obsstore.precursors' is deprecated, "
               "use 'obsstore.predecessors'")
        util.nouideprecwarn(msg, '4.4')

        return self.predecessors

    @propertycache
    def predecessors(self):
        predecessors = {}
        _addpredecessors(predecessors, self._all)
        return predecessors

    @propertycache
    def children(self):
        children = {}
        _addchildren(children, self._all)
        return children

    def _cached(self, attr):
        return attr in self.__dict__

    def _addmarkers(self, markers, rawdata):
        markers = list(markers)  # to allow repeated iteration
        self._data = self._data + rawdata
        self._all.extend(markers)
        if self._cached('successors'):
            _addsuccessors(self.successors, markers)
        if self._cached('predecessors'):
            _addpredecessors(self.predecessors, markers)
        if self._cached('children'):
            _addchildren(self.children, markers)
        _checkinvalidmarkers(markers)

    def relevantmarkers(self, nodes):
        """return a set of all obsolescence markers relevant to a set of nodes.

        "relevant" to a set of nodes means:

        - markers that use this changeset as successor
        - prune markers of direct children of this changeset
        - recursive application of the two rules on predecessors of these
          markers

        It is a set so you cannot rely on order."""

        pendingnodes = set(nodes)
        seenmarkers = set()
        seennodes = set(pendingnodes)
        precursorsmarkers = self.predecessors
        succsmarkers = self.successors
        children = self.children
        while pendingnodes:
            direct = set()
            for current in pendingnodes:
                direct.update(precursorsmarkers.get(current, ()))
                pruned = [m for m in children.get(current, ()) if not m[1]]
                direct.update(pruned)
                pruned = [m for m in succsmarkers.get(current, ()) if not m[1]]
                direct.update(pruned)
            direct -= seenmarkers
            pendingnodes = set([m[0] for m in direct])
            seenmarkers |= direct
            pendingnodes -= seennodes
            seennodes |= pendingnodes
        return seenmarkers

def makestore(ui, repo):
    """Create an obsstore instance from a repo."""
    # read default format for new obsstore.
    # developer config: format.obsstore-version
    defaultformat = ui.configint('format', 'obsstore-version')
    # rely on obsstore class default when possible.
    kwargs = {}
    if defaultformat is not None:
        kwargs['defaultformat'] = defaultformat
    readonly = not isenabled(repo, createmarkersopt)
    store = obsstore(repo.svfs, readonly=readonly, **kwargs)
    if store and readonly:
        ui.warn(_('obsolete feature not enabled but %i markers found!\n')
                % len(list(store)))
    return store

def commonversion(versions):
    """Return the newest version listed in both versions and our local formats.

    Returns None if no common version exists.
    """
    versions.sort(reverse=True)
    # search for the highest version known on both sides
    for v in versions:
        if v in formats:
            return v
    return None

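`commonversion` is a simple highest-common-version negotiation. A standalone sketch, with a hypothetical local `formats` table of {0, 1} standing in for the module-level `formats` dict:

```python
def commonversion(versions, formats=frozenset([0, 1])):
    # pick the newest version advertised by the peer that we also support
    versions.sort(reverse=True)
    for v in versions:
        if v in formats:
            return v
    return None

print(commonversion([2, 1, 0]))  # 1
print(commonversion([3]))        # None
```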
# arbitrarily picked to fit into the 8K limit from the HTTP server
# you have to take into account:
# - the version header
# - the base85 encoding
_maxpayload = 5300

def _pushkeyescape(markers):
    """encode markers into a dict suitable for pushkey exchange

    - binary data is base85 encoded
    - split in chunks smaller than 5300 bytes"""
    keys = {}
    parts = []
    currentlen = _maxpayload * 2  # ensure we create a new part
    for marker in markers:
        nextdata = _fm0encodeonemarker(marker)
        if (len(nextdata) + currentlen > _maxpayload):
            currentpart = []
            currentlen = 0
            parts.append(currentpart)
        currentpart.append(nextdata)
        currentlen += len(nextdata)
    for idx, part in enumerate(reversed(parts)):
        data = ''.join([_pack('>B', _fm0version)] + part)
        keys['dump%i' % idx] = util.b85encode(data)
    return keys

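The greedy chunking inside `_pushkeyescape` can be isolated into a generic sketch (the `chunk` helper and the tiny `maxpayload` of 10 bytes are made up for illustration; the real code uses `_maxpayload = 5300`):

```python
def chunk(blobs, maxpayload=10):
    # greedy split: start a new part whenever adding the next blob
    # would push the current part past maxpayload
    parts = []
    currentlen = maxpayload * 2  # force creation of the first part
    for blob in blobs:
        if len(blob) + currentlen > maxpayload:
            currentpart = []
            currentlen = 0
            parts.append(currentpart)
        currentpart.append(blob)
        currentlen += len(blob)
    return parts

print(chunk([b'aaaa', b'bbbb', b'cccc']))
# [[b'aaaa', b'bbbb'], [b'cccc']]
```

Note that, like the original, a single blob larger than `maxpayload` still gets its own part rather than being rejected.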
def listmarkers(repo):
    """List markers over pushkey"""
    if not repo.obsstore:
        return {}
    return _pushkeyescape(sorted(repo.obsstore))

def pushmarker(repo, key, old, new):
    """Push markers over pushkey"""
    if not key.startswith('dump'):
        repo.ui.warn(_('unknown key: %r') % key)
        return False
    if old:
        repo.ui.warn(_('unexpected old value for %r') % key)
        return False
    data = util.b85decode(new)
    lock = repo.lock()
    try:
        tr = repo.transaction('pushkey: obsolete markers')
        try:
            repo.obsstore.mergemarkers(tr, data)
            repo.invalidatevolatilesets()
            tr.close()
            return True
        finally:
            tr.release()
    finally:
        lock.release()

# keep compatibility for the 4.3 cycle
def allprecursors(obsstore, nodes, ignoreflags=0):
    movemsg = 'obsolete.allprecursors moved to obsutil.allprecursors'
    util.nouideprecwarn(movemsg, '4.3')
    return obsutil.allprecursors(obsstore, nodes, ignoreflags)

def allsuccessors(obsstore, nodes, ignoreflags=0):
    movemsg = 'obsolete.allsuccessors moved to obsutil.allsuccessors'
    util.nouideprecwarn(movemsg, '4.3')
    return obsutil.allsuccessors(obsstore, nodes, ignoreflags)

def marker(repo, data):
    movemsg = 'obsolete.marker moved to obsutil.marker'
    repo.ui.deprecwarn(movemsg, '4.3')
    return obsutil.marker(repo, data)

def getmarkers(repo, nodes=None, exclusive=False):
    movemsg = 'obsolete.getmarkers moved to obsutil.getmarkers'
    repo.ui.deprecwarn(movemsg, '4.3')
    return obsutil.getmarkers(repo, nodes=nodes, exclusive=exclusive)

def exclusivemarkers(repo, nodes):
    movemsg = 'obsolete.exclusivemarkers moved to obsutil.exclusivemarkers'
    repo.ui.deprecwarn(movemsg, '4.3')
    return obsutil.exclusivemarkers(repo, nodes)

def foreground(repo, nodes):
    movemsg = 'obsolete.foreground moved to obsutil.foreground'
    repo.ui.deprecwarn(movemsg, '4.3')
    return obsutil.foreground(repo, nodes)

def successorssets(repo, initialnode, cache=None):
    movemsg = 'obsolete.successorssets moved to obsutil.successorssets'
    repo.ui.deprecwarn(movemsg, '4.3')
    return obsutil.successorssets(repo, initialnode, cache=cache)

# mapping of 'set-name' -> <function to compute this set>
cachefuncs = {}
def cachefor(name):
    """Decorator to register a function as computing the cache for a set"""
    def decorator(func):
        if name in cachefuncs:
            msg = "duplicated registration for volatileset '%s' (existing: %r)"
            raise error.ProgrammingError(msg % (name, cachefuncs[name]))
        cachefuncs[name] = func
        return func
    return decorator

def getrevs(repo, name):
    """Return the set of revisions that belong to the <name> set

    Such access may compute the set and cache it for future use"""
    repo = repo.unfiltered()
    if not repo.obsstore:
        return frozenset()
    if name not in repo.obsstore.caches:
        repo.obsstore.caches[name] = cachefuncs[name](repo)
    return repo.obsstore.caches[name]

# To be simple we need to invalidate the obsolescence cache when:
#
# - a new changeset is added
# - public phase is changed
# - obsolescence markers are added
# - strip is used on a repo
def clearobscaches(repo):
    """Remove all obsolescence related caches from a repo

    This removes all caches in obsstore if the obsstore already exists on the
    repo.

    (We could be smarter here given the exact event that triggered the cache
    clearing)"""
    # only clear caches if there is obsstore data in this repo
    if 'obsstore' in repo._filecache:
        repo.obsstore.caches.clear()

def _mutablerevs(repo):
    """the set of mutable revisions in the repository"""
    return repo._phasecache.getrevset(repo, (phases.draft, phases.secret))

@cachefor('obsolete')
def _computeobsoleteset(repo):
    """the set of obsolete revisions"""
    getnode = repo.changelog.node
    notpublic = _mutablerevs(repo)
    isobs = repo.obsstore.successors.__contains__
    obs = set(r for r in notpublic if isobs(getnode(r)))
    return obs

@cachefor('unstable')
def _computeunstableset(repo):
    msg = ("'unstable' volatile set is deprecated, "
           "use 'orphan'")
    repo.ui.deprecwarn(msg, '4.4')

    return _computeorphanset(repo)

@cachefor('orphan')
def _computeorphanset(repo):
    """the set of non obsolete revisions with obsolete parents"""
    pfunc = repo.changelog.parentrevs
    mutable = _mutablerevs(repo)
    obsolete = getrevs(repo, 'obsolete')
    others = mutable - obsolete
    unstable = set()
    for r in sorted(others):
        # A rev is unstable if one of its parents is obsolete or unstable
        # this works since we traverse in growing rev order
        for p in pfunc(r):
            if p in obsolete or p in unstable:
                unstable.add(r)
                break
    return unstable

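A minimal sketch of the orphan computation over a toy linear history (revision numbers in place of a real changelog; the parent map and obsolete set are made up for illustration):

```python
# parents[r] -> tuple of parent revs; rev 0 is the public root
parents = {0: (), 1: (0,), 2: (1,), 3: (2,)}
obsolete = {1}          # rev 1 was rewritten
mutable = {1, 2, 3}     # draft revisions

orphan = set()
for r in sorted(mutable - obsolete):
    # a rev is orphan if any parent is obsolete or already orphan;
    # visiting in increasing rev order makes a single pass sufficient
    if any(p in obsolete or p in orphan for p in parents[r]):
        orphan.add(r)

print(sorted(orphan))  # [2, 3]
```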
933 @cachefor('suspended')
941 @cachefor('suspended')
934 def _computesuspendedset(repo):
942 def _computesuspendedset(repo):
935 """the set of obsolete parents with non obsolete descendants"""
943 """the set of obsolete parents with non obsolete descendants"""
936 suspended = repo.changelog.ancestors(getrevs(repo, 'orphan'))
944 suspended = repo.changelog.ancestors(getrevs(repo, 'orphan'))
937 return set(r for r in getrevs(repo, 'obsolete') if r in suspended)
945 return set(r for r in getrevs(repo, 'obsolete') if r in suspended)
938
946
939 @cachefor('extinct')
947 @cachefor('extinct')
940 def _computeextinctset(repo):
948 def _computeextinctset(repo):
941 """the set of obsolete parents without non obsolete descendants"""
949 """the set of obsolete parents without non obsolete descendants"""
942 return getrevs(repo, 'obsolete') - getrevs(repo, 'suspended')
950 return getrevs(repo, 'obsolete') - getrevs(repo, 'suspended')
943
951
@cachefor('bumped')
def _computebumpedset(repo):
    msg = ("'bumped' volatile set is deprecated, "
           "use 'phasedivergent'")
    repo.ui.deprecwarn(msg, '4.4')

    return _computephasedivergentset(repo)

@cachefor('phasedivergent')
def _computephasedivergentset(repo):
    """the set of revs trying to obsolete public revisions"""
    bumped = set()
    # util function (avoid attribute lookup in the loop)
    phase = repo._phasecache.phase # would be faster to grab the full list
    public = phases.public
    cl = repo.changelog
    torev = cl.nodemap.get
    for ctx in repo.set('(not public()) and (not obsolete())'):
        rev = ctx.rev()
        # We only evaluate mutable, non-obsolete revisions
        node = ctx.node()
        # (future) A cache of predecessors may be worth it if split is very
        # common
        for pnode in obsutil.allpredecessors(repo.obsstore, [node],
                                             ignoreflags=bumpedfix):
            prev = torev(pnode) # unfiltered! but so is phasecache
            if (prev is not None) and (phase(repo, prev) <= public):
                # we have a public predecessor
                bumped.add(rev)
                break # Next draft!
    return bumped

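The walk above can be sketched in isolation: a draft revision is phase-divergent if any of its transitive predecessors is public. A minimal, self-contained model (plain dicts and sets stand in for the obsstore and phase cache; all names here are illustrative, not Mercurial's API):

```python
def isphasedivergent(node, predecessors, publicnodes):
    """Return True if any transitive predecessor of ``node`` is public.

    ``predecessors`` maps node -> iterable of predecessor nodes and
    ``publicnodes`` is a set of public nodes; both are simplified
    stand-ins for the obsstore markers and the phase cache.
    """
    stack = list(predecessors.get(node, ()))
    seen = set()
    while stack:
        p = stack.pop()
        if p in seen:
            continue  # markers can form cycles; visit each node once
        seen.add(p)
        if p in publicnodes:
            return True  # this draft rewrites published history
        stack.extend(predecessors.get(p, ()))
    return False

# 'new' rewrites 'old', but 'old' was already published.
print(isphasedivergent('new', {'new': ['old']}, {'old'}))  # True
```

The real implementation does the same traversal via `obsutil.allpredecessors` and compares against `phases.public` through the phase cache rather than a set.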
@cachefor('divergent')
def _computedivergentset(repo):
    msg = ("'divergent' volatile set is deprecated, "
           "use 'contentdivergent'")
    repo.ui.deprecwarn(msg, '4.4')

    return _computecontentdivergentset(repo)

@cachefor('contentdivergent')
def _computecontentdivergentset(repo):
    """the set of revs that compete to be the final successors of some
    revision.
    """
    divergent = set()
    obsstore = repo.obsstore
    newermap = {}
    for ctx in repo.set('(not public()) - obsolete()'):
        mark = obsstore.predecessors.get(ctx.node(), ())
        toprocess = set(mark)
        seen = set()
        while toprocess:
            prec = toprocess.pop()[0]
            if prec in seen:
                continue # emergency cycle hanging prevention
            seen.add(prec)
            if prec not in newermap:
                obsutil.successorssets(repo, prec, cache=newermap)
            newer = [n for n in newermap[prec] if n]
            if len(newer) > 1:
                divergent.add(ctx.rev())
                break
            toprocess.update(obsstore.predecessors.get(prec, ()))
    return divergent


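The content-divergence check above can likewise be modelled on its own: starting from a draft node, follow predecessor markers and flag divergence as soon as any predecessor has more than one non-empty set of final successors. A hedged sketch with plain dicts standing in for the obsstore and the `successorssets` cache (names are illustrative only):

```python
def isdivergent(node, predecessors, successorssets):
    """Return True if any transitive predecessor of ``node`` has more
    than one non-empty set of final successors.

    ``predecessors`` maps node -> list of predecessor nodes (a stand-in
    for the obsstore's predecessor markers); ``successorssets`` maps
    node -> list of successor sets.
    """
    toprocess = set(predecessors.get(node, ()))
    seen = set()
    while toprocess:
        prec = toprocess.pop()
        if prec in seen:
            continue  # emergency cycle hanging prevention, as above
        seen.add(prec)
        newer = [s for s in successorssets.get(prec, []) if s]
        if len(newer) > 1:
            return True  # two live rewrites of 'prec' compete
        toprocess.update(predecessors.get(prec, ()))
    return False

# 'old' was independently rewritten into both 'a' and 'b': divergent.
preds = {'a': ['old'], 'b': ['old']}
sss = {'old': [['a'], ['b']]}
print(isdivergent('a', preds, sss))  # True
```

The real code pops full marker tuples (hence `toprocess.pop()[0]`) and shares `newermap` across all revisions so the expensive `successorssets` computation is done once per predecessor.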
def createmarkers(repo, relations, flag=0, date=None, metadata=None,
                  operation=None):
    """Add obsolete markers between changesets in a repo

    <relations> must be an iterable of (<old>, (<new>, ...)[, {metadata}])
    tuples. `old` and `news` are changectx. metadata is an optional dictionary
    containing metadata for this marker only. It is merged with the global
    metadata specified through the `metadata` argument of this function.

    Trying to obsolete a public changeset will raise an exception.

    The current user and date are used unless specified otherwise in the
    metadata attribute.

    This function operates within a transaction of its own, but does
    not take any lock on the repo.
    """
    # prepare metadata
    if metadata is None:
        metadata = {}
    if 'user' not in metadata:
        metadata['user'] = repo.ui.username()

    # Operation metadata handling
    useoperation = repo.ui.configbool('experimental',
                                      'stabilization.track-operation')
    if useoperation and operation:
        metadata['operation'] = operation

    tr = repo.transaction('add-obsolescence-marker')
    try:
        markerargs = []
        for rel in relations:
            prec = rel[0]
            sucs = rel[1]
            localmetadata = metadata.copy()
            if 2 < len(rel):
                localmetadata.update(rel[2])

            if not prec.mutable():
                raise error.Abort(_("cannot obsolete public changeset: %s")
                                  % prec,
                                  hint="see 'hg help phases' for details")
            nprec = prec.node()
            nsucs = tuple(s.node() for s in sucs)
            npare = None
            if not nsucs:
                npare = tuple(p.node() for p in prec.parents())
            if nprec in nsucs:
                raise error.Abort(_("changeset %s cannot obsolete itself")
                                  % prec)

            # Creating the marker causes the hidden cache to become invalid,
            # which causes recomputation when we ask for prec.parents() above.
            # Resulting in n^2 behavior. So let's prepare all of the args
            # first, then create the markers.
            markerargs.append((nprec, nsucs, npare, localmetadata))

        for args in markerargs:
            nprec, nsucs, npare, localmetadata = args
            repo.obsstore.create(tr, nprec, nsucs, flag, parents=npare,
                                 date=date, metadata=localmetadata,
                                 ui=repo.ui)
        repo.filteredrevcache.clear()
        tr.close()
    finally:
        tr.release()
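This changeset's subject is the 255-byte limit on obsmarker metadata values: each value's length is stored in a single byte, so an over-long `user` field (as in the test above) must be rejected with a helpful error rather than a crash. A hedged sketch of the kind of check `obsstore.create` performs, with the error text modelled on the message shown in the new test (this is an illustration, not the exact Mercurial implementation):

```python
def checkmetadata(metadata, maxlen=255):
    """Reject metadata values too long for an obsmarker's one-byte
    length field.  Illustrative stand-in for the validation this
    changeset adds to the real obsstore."""
    for key, value in metadata.items():
        if len(value) > maxlen:
            raise ValueError(
                'obsstore metadata value cannot be longer than %d bytes '
                '(value for key "%s" is %d bytes)'
                % (maxlen, key, len(value)))

checkmetadata({'user': 'alice <alice@example.com>'})  # fine, under 255 bytes
try:
    checkmetadata({'user': 'a' * 300})
except ValueError as e:
    print('rejected:', e)
```

In the real code the check raises `error.ProgrammingError`, which surfaces as the traceback captured in the test file at the top of this changeset.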