obsstore: update create docstring to point to the coder friendly function...
Pierre-Yves David, r20516:3af218cf default
@@ -1,843 +1,846 @@
# obsolete.py - obsolete markers handling
#
# Copyright 2012 Pierre-Yves David <pierre-yves.david@ens-lyon.org>
#                Logilab SA        <contact@logilab.fr>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""Obsolete markers handling

An obsolete marker maps an old changeset to a list of new
changesets. If the list of new changesets is empty, the old changeset
is said to be "killed". Otherwise, the old changeset is being
"replaced" by the new changesets.

Obsolete markers can be used to record and distribute changeset graph
transformations performed by history rewriting operations, and help
building new tools to reconcile conflicting rewriting actions. To
facilitate conflict resolution, markers include various annotations
besides old and new changeset identifiers, such as creation date or
author name.

The old obsoleted changeset is called a "precursor" and possible
replacements are called "successors". Markers that use changeset X as a
precursor are called "successor markers of X" because they hold
information about the successors of X. Markers that use changeset Y as a
successor are called "precursor markers of Y" because they hold
information about the precursors of Y.

Examples:

- When changeset A is replaced by a changeset A', one marker is stored:

    (A, (A',))

- When changesets A and B are folded into a new changeset C, two markers are
  stored:

    (A, (C,)) and (B, (C,))

- When changeset A is simply "pruned" from the graph, a marker is created:

    (A, ())

- When changeset A is split into B and C, a single marker is used:

    (A, (B, C))

  We use a single marker to distinguish the "split" case from the
  "divergence" case. If two independent operations rewrite the same
  changeset A into A' and A'', we have an error case: divergent rewriting.
  We can detect it because two markers will be created independently:

  (A, (B,)) and (A, (C,))

Format
------

Markers are stored in an append-only file stored in
'.hg/store/obsstore'.

The file starts with a version header:

- 1 unsigned byte: version number, starting at zero.


The header is followed by the markers. Each marker is made of:

- 1 unsigned byte: number of new changesets "N", can be zero.

- 1 unsigned 32-bit integer: metadata size "M" in bytes.

- 1 byte: a bit field. It is reserved for flags used in common
  obsolete marker operations, to avoid repeated decoding of metadata
  entries.

- 20 bytes: obsoleted changeset identifier.

- N*20 bytes: new changeset identifiers.

- M bytes: metadata as a sequence of nul-terminated strings. Each
  string contains a key and a value, separated by a colon ':', without
  additional encoding. Keys cannot contain '\0' or ':' and values
  cannot contain '\0'.
"""
import struct
import util, base85, node
import phases
from i18n import _

_pack = struct.pack
_unpack = struct.unpack

_SEEK_END = 2 # os.SEEK_END was introduced in Python 2.5

# the obsolete feature is not mature enough to be enabled by default.
# you have to rely on a third party extension to enable this.
_enabled = False

# data used for parsing and writing
_fmversion = 0
_fmfixed   = '>BIB20s'
_fmnode = '20s'
_fmfsize = struct.calcsize(_fmfixed)
_fnodesize = struct.calcsize(_fmnode)

### obsolescence marker flag

## bumpedfix flag
#
# When a changeset A' succeeds a changeset A which became public, we call A'
# "bumped" because it's a successor of a public changeset
#
# o    A' (bumped)
# |`:
# | o  A
# |/
# o    Z
#
# The way to solve this situation is to create a new changeset Ad as a child
# of A. This changeset has the same content as A'. So the diff from A to A'
# is the same as the diff from A to Ad. Ad is marked as a successor of A'
#
# o   Ad
# |`:
# | x A'
# |'|
# o | A
# |/
# o Z
#
# But by transitivity Ad is also a successor of A. To avoid having Ad marked
# as bumped too, we add the `bumpedfix` flag to the marker, <A', (Ad,)>.
# This flag means that the successor expresses the changes between the public
# and bumped versions and fixes the situation, breaking the transitivity of
# "bumped" here.
bumpedfix = 1

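# Illustrative sketch, not part of the original module: how a single marker
# could be packed and unpacked by hand with the format constants above. The
# `_examplepackroundtrip` name and the dummy node values are hypothetical.
def _examplepackroundtrip():
    """Pack one marker with dummy precursor/successor nodes, read it back."""
    prec = '\x11' * 20  # dummy precursor node (20 bytes)
    succ = '\x22' * 20  # dummy successor node (20 bytes)
    meta = 'date:0 0'   # metadata: nul-separated 'key:value' strings
    # fixed part: number of successors, metadata size, flags, precursor;
    # then the successor nodes and the metadata blob
    raw = _pack(_fmfixed + _fmnode, 1, len(meta), 0, prec, succ) + meta
    nbsuc, mdsize, flags, gotprec = _unpack(_fmfixed, raw[:_fmfsize])
    gotsucs = _unpack(_fmnode * nbsuc, raw[_fmfsize:_fmfsize + _fnodesize])
    gotmeta = raw[_fmfsize + _fnodesize * nbsuc:][:mdsize]
    assert (gotprec, gotsucs, gotmeta) == (prec, (succ,), meta)
    # a marker carrying the bumpedfix flag would be detected with:
    assert not (flags & bumpedfix)
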
def _readmarkers(data):
    """Read and enumerate markers from raw data"""
    off = 0
    diskversion = _unpack('>B', data[off:off + 1])[0]
    off += 1
    if diskversion != _fmversion:
        raise util.Abort(_('parsing obsolete marker: unknown version %r')
                         % diskversion)

    # Loop on markers
    l = len(data)
    while off + _fmfsize <= l:
        # read fixed part
        cur = data[off:off + _fmfsize]
        off += _fmfsize
        nbsuc, mdsize, flags, pre = _unpack(_fmfixed, cur)
        # read replacement
        sucs = ()
        if nbsuc:
            s = (_fnodesize * nbsuc)
            cur = data[off:off + s]
            sucs = _unpack(_fmnode * nbsuc, cur)
            off += s
        # read metadata
        # (metadata will be decoded on demand)
        metadata = data[off:off + mdsize]
        if len(metadata) != mdsize:
            raise util.Abort(_('parsing obsolete marker: metadata is too '
                               'short, %d bytes expected, got %d')
                             % (mdsize, len(metadata)))
        off += mdsize
        yield (pre, sucs, flags, metadata)

def encodemeta(meta):
    """Return an encoded version of a string-to-string metadata mapping.

    Assumes no ':' in keys and no '\0' in either keys or values."""
    for key, value in meta.iteritems():
        if ':' in key or '\0' in key:
            raise ValueError("':' and '\0' are forbidden in metadata keys")
        if '\0' in value:
            raise ValueError("'\0' is forbidden in metadata values")
    return '\0'.join(['%s:%s' % (k, meta[k]) for k in sorted(meta)])

def decodemeta(data):
    """Return a string-to-string dictionary from its encoded version."""
    d = {}
    for l in data.split('\0'):
        if l:
            # values may contain ':', so split on the first one only
            key, value = l.split(':', 1)
            d[key] = value
    return d

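# Illustrative sketch, not part of the original module: encodemeta and
# decodemeta are inverses for well-formed mappings, so a round trip
# preserves the dictionary. The function name is hypothetical.
def _examplemetaroundtrip():
    meta = {'date': '0 0', 'user': 'alice <alice@example.com>'}
    assert decodemeta(encodemeta(meta)) == meta
    # values may contain ':' (keys may not)
    assert decodemeta(encodemeta({'k': 'a:b'})) == {'k': 'a:b'}
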
class marker(object):
    """Wrap obsolete marker raw data"""

    def __init__(self, repo, data):
        # the repo argument will be used to create changectx in a later
        # version
        self._repo = repo
        self._data = data
        self._decodedmeta = None

    def __hash__(self):
        return hash(self._data)

    def __eq__(self, other):
        if type(other) != type(self):
            return False
        return self._data == other._data

    def precnode(self):
        """Precursor changeset node identifier"""
        return self._data[0]

    def succnodes(self):
        """List of successor changeset node identifiers"""
        return self._data[1]

    def metadata(self):
        """Decoded metadata dictionary"""
        if self._decodedmeta is None:
            self._decodedmeta = decodemeta(self._data[3])
        return self._decodedmeta

    def date(self):
        """Creation date as (unixtime, offset)"""
        parts = self.metadata()['date'].split(' ')
        return (float(parts[0]), int(parts[1]))

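# Illustrative sketch, not part of the original module: wrapping a raw
# marker tuple in the marker class. The repo argument is only stored so far,
# so None is enough here. The function name is hypothetical.
def _examplemarkerwrapper():
    raw = ('\x11' * 20, ('\x22' * 20,), 0, encodemeta({'date': '0 0'}))
    m = marker(None, raw)
    assert m.precnode() == '\x11' * 20
    assert m.succnodes() == ('\x22' * 20,)
    assert m.date() == (0.0, 0)
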
class obsstore(object):
    """Store obsolete markers

    Markers can be accessed with two mappings:
    - precursors[x] -> set(markers on precursors edges of x)
    - successors[x] -> set(markers on successors edges of x)
    """

    def __init__(self, sopener):
        # caches for various obsolescence related computations
        self.caches = {}
        self._all = []
        # new markers to serialize
        self.precursors = {}
        self.successors = {}
        self.sopener = sopener
        data = sopener.tryread('obsstore')
        if data:
            self._load(_readmarkers(data))

    def __iter__(self):
        return iter(self._all)

    def __nonzero__(self):
        return bool(self._all)

    def create(self, transaction, prec, succs=(), flag=0, metadata=None):
        """obsolete: add a new obsolete marker

        * ensuring it is hashable
        * check mandatory metadata
        * encode metadata

        If you are a human writing code that creates markers, you want to
        use the `createmarkers` function in this module instead.
        """
        if metadata is None:
            metadata = {}
        if 'date' not in metadata:
            metadata['date'] = "%d %d" % util.makedate()
        if len(prec) != 20:
            raise ValueError(prec)
        for succ in succs:
            if len(succ) != 20:
                raise ValueError(succ)
        marker = (str(prec), tuple(succs), int(flag), encodemeta(metadata))
        self.add(transaction, [marker])

    def add(self, transaction, markers):
        """Add new markers to the store

        Takes care of filtering out duplicates.
        Returns the number of new markers."""
        if not _enabled:
            raise util.Abort('obsolete feature is not enabled on this repo')
        known = set(self._all)
        new = []
        for m in markers:
            if m not in known:
                known.add(m)
                new.append(m)
        if new:
            f = self.sopener('obsstore', 'ab')
            try:
                # Whether the file's current position is at the beginning or
                # at the end after opening a file for appending is
                # implementation defined. So we must seek to the end before
                # calling tell(), or we may get a zero offset for non-zero
                # sized files on some platforms (issue3543).
                f.seek(0, _SEEK_END)
                offset = f.tell()
                transaction.add('obsstore', offset)
                # offset == 0: new file - add the version header
                for bytes in _encodemarkers(new, offset == 0):
                    f.write(bytes)
            finally:
                # XXX: f.close() == filecache invalidation == obsstore
                # rebuilt. call 'filecacheentry.refresh()' here
                f.close()
            self._load(new)
            # new markers *may* have changed several sets. invalidate the
            # cache.
            self.caches.clear()
        return len(new)

    def mergemarkers(self, transaction, data):
        """merge a binary stream of markers into the obsstore"""
        markers = _readmarkers(data)
        self.add(transaction, markers)

    def _load(self, markers):
        for mark in markers:
            self._all.append(mark)
            pre, sucs = mark[:2]
            self.successors.setdefault(pre, set()).add(mark)
            for suc in sucs:
                self.precursors.setdefault(suc, set()).add(mark)
        if node.nullid in self.precursors:
            raise util.Abort(_('bad obsolescence marker detected: '
                               'invalid successors nullid'))

def _encodemarkers(markers, addheader=False):
    # Kept separate from flushmarkers(), it will be reused for
    # markers exchange.
    if addheader:
        yield _pack('>B', _fmversion)
    for marker in markers:
        yield _encodeonemarker(marker)


def _encodeonemarker(marker):
    pre, sucs, flags, metadata = marker
    nbsuc = len(sucs)
    format = _fmfixed + (_fmnode * nbsuc)
    data = [nbsuc, len(metadata), flags, pre]
    data.extend(sucs)
    return _pack(format, *data) + metadata

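# Illustrative sketch, not part of the original module: _encodemarkers and
# _readmarkers are symmetric, so serializing marker tuples with the version
# header and parsing the result yields the same tuples back. The function
# name is hypothetical.
def _exampleencoderoundtrip():
    markers = [('\x11' * 20, ('\x22' * 20,), 0, encodemeta({'date': '0 0'})),
               ('\x33' * 20, (), 0, '')]  # the second marker is a prune
    data = ''.join(_encodemarkers(markers, addheader=True))
    assert list(_readmarkers(data)) == markers
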
# arbitrarily picked to fit into the 8K limit from the HTTP server
# you have to take into account:
# - the version header
# - the base85 encoding
_maxpayload = 5300

def listmarkers(repo):
    """List markers over pushkey"""
    if not repo.obsstore:
        return {}
    keys = {}
    parts = []
    currentlen = _maxpayload * 2  # ensure we create a new part
    for marker in repo.obsstore:
        nextdata = _encodeonemarker(marker)
        if (len(nextdata) + currentlen > _maxpayload):
            currentpart = []
            currentlen = 0
            parts.append(currentpart)
        currentpart.append(nextdata)
        currentlen += len(nextdata)
    for idx, part in enumerate(reversed(parts)):
        data = ''.join([_pack('>B', _fmversion)] + part)
        keys['dump%i' % idx] = base85.b85encode(data)
    return keys

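# Illustrative sketch, not part of the original module: the greedy chunking
# used by listmarkers, isolated from pushkey. A new part is opened whenever
# appending the next encoded marker would push the current part past
# _maxpayload. The function name is hypothetical.
def _examplesplitpayload(encodedmarkers):
    parts = []
    currentpart, currentlen = [], _maxpayload * 2  # force a first new part
    for nextdata in encodedmarkers:
        if len(nextdata) + currentlen > _maxpayload:
            currentpart, currentlen = [], 0
            parts.append(currentpart)
        currentpart.append(nextdata)
        currentlen += len(nextdata)
    return parts
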
def pushmarker(repo, key, old, new):
    """Push markers over pushkey"""
    if not key.startswith('dump'):
        repo.ui.warn(_('unknown key: %r') % key)
        return 0
    if old:
        repo.ui.warn(_('unexpected old value for %r') % key)
        return 0
    data = base85.b85decode(new)
    lock = repo.lock()
    try:
        tr = repo.transaction('pushkey: obsolete markers')
        try:
            repo.obsstore.mergemarkers(tr, data)
            tr.close()
            return 1
        finally:
            tr.release()
    finally:
        lock.release()

def allmarkers(repo):
    """all obsolete markers known in a repository"""
    for markerdata in repo.obsstore:
        yield marker(repo, markerdata)

def precursormarkers(ctx):
    """obsolete markers marking this changeset as a successor"""
    for data in ctx._repo.obsstore.precursors.get(ctx.node(), ()):
        yield marker(ctx._repo, data)

def successormarkers(ctx):
    """obsolete markers making this changeset obsolete"""
    for data in ctx._repo.obsstore.successors.get(ctx.node(), ()):
        yield marker(ctx._repo, data)

def allsuccessors(obsstore, nodes, ignoreflags=0):
    """Yield every successor node of <nodes>.

    Some successors may be unknown locally.

    This is a linear yield unsuited to detecting split changesets. It
    includes the initial nodes too."""
    remaining = set(nodes)
    seen = set(remaining)
    while remaining:
        current = remaining.pop()
        yield current
        for mark in obsstore.successors.get(current, ()):
            # ignore markers flagged with the specified flag
            if mark[2] & ignoreflags:
                continue
            for suc in mark[1]:
                if suc not in seen:
                    seen.add(suc)
                    remaining.add(suc)

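# Illustrative sketch, not part of the original module: allsuccessors only
# needs the store's `successors` mapping, so a minimal stand-in object can
# demonstrate the transitive walk A -> B -> C. Short strings stand in for
# real 20-byte nodes; all names here are hypothetical.
def _exampleallsuccessors():
    class fakestore(object):
        pass
    store = fakestore()
    # markers are (precursor, successors, flags, metadata) tuples
    store.successors = {'A': set([('A', ('B',), 0, '')]),
                        'B': set([('B', ('C',), 0, '')])}
    assert set(allsuccessors(store, ['A'])) == set(['A', 'B', 'C'])
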
def allprecursors(obsstore, nodes, ignoreflags=0):
    """Yield every precursor node of <nodes>.

    Some precursors may be unknown locally.

    This is a linear yield unsuited to detecting folded changesets. It
    includes the initial nodes too."""

    remaining = set(nodes)
    seen = set(remaining)
    while remaining:
        current = remaining.pop()
        yield current
        for mark in obsstore.precursors.get(current, ()):
            # ignore markers flagged with the specified flag
            if mark[2] & ignoreflags:
                continue
            suc = mark[0]
            if suc not in seen:
                seen.add(suc)
                remaining.add(suc)

def foreground(repo, nodes):
    """return all nodes in the "foreground" of other nodes

    The foreground of a revision is anything reachable using parent ->
    children or precursor -> successor relations. It is very similar to
    "descendant" but augmented with obsolescence information.

    Beware that obsolescence cycles may produce unexpected results in
    complex situations.
    """
    repo = repo.unfiltered()
    foreground = set(repo.set('%ln::', nodes))
    if repo.obsstore:
        # We only need this complicated logic if there is obsolescence
        # XXX will probably deserve an optimised revset.
        nm = repo.changelog.nodemap
        plen = -1
        # compute the whole set of successors or descendants
        while len(foreground) != plen:
            plen = len(foreground)
            succs = set(c.node() for c in foreground)
            mutable = [c.node() for c in foreground if c.mutable()]
            succs.update(allsuccessors(repo.obsstore, mutable))
            known = (n for n in succs if n in nm)
            foreground = set(repo.set('%ln::', known))
    return set(c.node() for c in foreground)


def successorssets(repo, initialnode, cache=None):
    """Return all successors sets of initialnode

    The successors set of a changeset A is the group of revisions that
    succeed A. It succeeds A as a consistent whole, each revision being only
    a partial replacement. The successors set contains non-obsolete
    changesets only.

    This function returns the full list of successors sets, which is why it
    returns a list of tuples and not just a single tuple. Each tuple is a
    valid successors set. Note that (A,) may be a valid successors set for
    changeset A (see below).

    In most cases, a changeset A will have a single element (e.g. the
    changeset A is replaced by A') in its successors set. Though, it is also
    common for a changeset A to have no elements in its successors set (e.g.
    the changeset has been pruned). Therefore, the returned list of
    successors sets will be [(A',)] or [], respectively.

    When a changeset A is split into A' and B', however, it will result in a
    successors set containing more than a single element, i.e. [(A',B')].
    Divergent changesets will result in multiple successors sets, i.e.
    [(A',), (A'',)].

    If a changeset A is not obsolete, then it will conceptually have no
    successors set. To distinguish this from a pruned changeset, the
    successors set will only contain itself, i.e. [(A,)].

    Finally, successors unknown locally are considered to be pruned
    (obsoleted without any successors).

    The optional `cache` parameter is a dictionary that may contain
    precomputed successors sets. It is meant to reuse the computation of a
    previous call to `successorssets` when multiple calls are made at the
    same time. The cache dictionary is updated in place. The caller is
    responsible for its life span. Code that makes multiple calls to
    `successorssets` *must* use this cache mechanism or suffer terrible
    performance.
    """

    succmarkers = repo.obsstore.successors

    # Stack of nodes we search successors sets for
    toproceed = [initialnode]
    # set version of above list for fast loop detection
    # element added to "toproceed" must be added here
    stackedset = set(toproceed)
    if cache is None:
        cache = {}

    # This while loop is the flattened version of a recursive search for
    # successors sets
    #
    # def successorssets(x):
    #    successors = directsuccessors(x)
    #    ss = [[]]
    #    for succ in directsuccessors(x):
    #        # product as in itertools cartesian product
    #        ss = product(ss, successorssets(succ))
    #    return ss
    #
    # But we can not use plain recursive calls here:
    # - that would blow the python call stack
    # - obsolescence markers may have cycles, we need to handle them.
    #
    # The `toproceed` list acts as our call stack. Every node we search
    # successors sets for is stacked there.
    #
    # The `stackedset` is a set version of this stack used to check if a
    # node is already stacked. This check is used to detect cycles and
    # prevent infinite loops.
    #
    # successors sets of all nodes are stored in the `cache` dictionary.
    #
    # After this while loop ends we use the cache to return the successors
    # sets for the node requested by the caller.
    while toproceed:
        # Every iteration tries to compute the successors sets of the
        # topmost node of the stack: CURRENT.
        #
        # There are four possible outcomes:
        #
        # 1) We already know the successors sets of CURRENT:
        #    -> mission accomplished, pop it from the stack.
        # 2) Node is not obsolete:
        #    -> the node is its own successors sets. Add it to the cache.
        # 3) We do not know successors set of direct successors of CURRENT:
        #    -> We add those successors to the stack.
        # 4) We know successors sets of all direct successors of CURRENT:
        #    -> We can compute CURRENT successors set and add it to the
        #       cache.
        #
        current = toproceed[-1]
        if current in cache:
            # case (1): We already know the successors sets
            stackedset.remove(toproceed.pop())
        elif current not in succmarkers:
            # case (2): The node is not obsolete.
            if current in repo:
                # We have a valid last successor.
                cache[current] = [(current,)]
            else:
                # Final obsolete version is unknown locally.
                # Do not count that as a valid successor.
                cache[current] = []
        else:
            # cases (3) and (4)
            #
            # We proceed in two phases. Phase 1 aims to distinguish case (3)
            # from case (4):
            #
            #     For each direct successor of CURRENT, we check whether its
            #     successors sets are known. If they are not, we stack the
            #     unknown node and proceed to the next iteration of the
            #     while loop. (case 3)
            #
            #     During this step, we may detect obsolescence cycles: a
            #     node with unknown successors sets but already in the call
            #     stack. In such a situation, we arbitrarily set the
            #     successors sets of the node to nothing (node pruned) to
            #     break the cycle.
            #
            #     If no break was encountered we proceed to phase 2.
            #
            # Phase 2 computes successors sets of CURRENT (case 4); see
            # details in phase 2 itself.
            #
            # Note the two levels of iteration in each phase.
            # - The first one handles obsolescence markers using CURRENT as
            #   precursor (successors markers of CURRENT).
            #
            #   Having multiple entries here means divergence.
            #
            # - The second one handles successors defined in each marker.
            #
            #   Having none means pruned node, multiple successors means
            #   split, a single successor is a standard replacement.
            #
            for mark in sorted(succmarkers[current]):
                for suc in mark[1]:
                    if suc not in cache:
                        if suc in stackedset:
                            # cycle breaking
                            cache[suc] = []
                        else:
                            # case (3) If we have not computed successors
                            # sets of one of those successors we add it to
                            # the `toproceed` stack and stop all work for
                            # this iteration.
                            toproceed.append(suc)
                            stackedset.add(suc)
                            break
                else:
                    continue
                break
            else:
                # case (4): we know all successors sets of all direct
                # successors
                #
                # Successors set contributed by each marker depends on the
                # successors sets of all its "successors" nodes.
                #
                # Each different marker is a divergence in the obsolescence
                # history. It contributes successors sets distinct from
                # other markers.
                #
                # Within a marker, a successor may have divergent successors
                # sets. In such a case, the marker will contribute multiple
                # divergent successors sets. If multiple successors have
                # divergent successors sets, a cartesian product is used.
                #
                # At the end we post-process successors sets to remove
                # duplicated entries and successors sets that are strict
                # subsets of another one.
                succssets = []
                for mark in sorted(succmarkers[current]):
                    # successors sets contributed by this marker
                    markss = [[]]
                    for suc in mark[1]:
                        # cartesian product with previous successors
                        productresult = []
                        for prefix in markss:
                            for suffix in cache[suc]:
                                newss = list(prefix)
                                for part in suffix:
                                    # do not duplicate entries in a
                                    # successors set, first entry wins.
                                    if part not in newss:
                                        newss.append(part)
                                productresult.append(newss)
                        markss = productresult
                    succssets.extend(markss)
                # remove duplicates and subsets
                seen = []
                final = []
                candidate = sorted(((set(s), s) for s in succssets if s),
                                   key=lambda x: len(x[1]), reverse=True)
                for setversion, listversion in candidate:
                    for seenset in seen:
                        if setversion.issubset(seenset):
                            break
                    else:
                        final.append(listversion)
                        seen.append(setversion)
                final.reverse() # put small successors set first
                cache[current] = final
    return cache[initialnode]

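# Illustrative sketch, not part of the original module: successorssets only
# uses repo.obsstore.successors and node membership tests, so tiny stubs can
# exercise the three interesting shapes: replacement, split and divergence.
# Short strings stand in for real 20-byte nodes; all names are hypothetical.
def _examplesuccessorssets():
    class fakestore(object):
        def __init__(self, successors):
            self.successors = successors
    class fakerepo(object):
        def __init__(self, nodes, successors):
            self._nodes = set(nodes)
            self.obsstore = fakestore(successors)
        def __contains__(self, node):
            return node in self._nodes
    # A rewritten as B: a single successors set
    repo = fakerepo('AB', {'A': set([('A', ('B',), 0, '')])})
    assert [tuple(s) for s in successorssets(repo, 'A')] == [('B',)]
    # A split into B and C: one successors set with two elements
    repo = fakerepo('ABC', {'A': set([('A', ('B', 'C'), 0, '')])})
    assert [tuple(s) for s in successorssets(repo, 'A')] == [('B', 'C')]
    # A rewritten independently as B and as C: divergence, two sets
    repo = fakerepo('ABC', {'A': set([('A', ('B',), 0, ''),
                                      ('A', ('C',), 0, '')])})
    assert (sorted(tuple(s) for s in successorssets(repo, 'A'))
            == [('B',), ('C',)])
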
def _knownrevs(repo, nodes):
    """yield revision numbers of known nodes passed in parameters

    Unknown revisions are silently ignored."""
    torev = repo.changelog.nodemap.get
    for n in nodes:
        rev = torev(n)
        if rev is not None:
            yield rev

# mapping of 'set-name' -> <function to compute this set>
cachefuncs = {}
def cachefor(name):
    """Decorator to register a function as computing the cache for a set"""
    def decorator(func):
        assert name not in cachefuncs
        cachefuncs[name] = func
        return func
    return decorator

def getrevs(repo, name):
    """Return the set of revisions that belong to the <name> set

    Such access may compute the set and cache it for future use"""
    repo = repo.unfiltered()
    if not repo.obsstore:
        return ()
    if name not in repo.obsstore.caches:
        repo.obsstore.caches[name] = cachefuncs[name](repo)
    return repo.obsstore.caches[name]

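# Illustrative sketch, not part of the original module: how a new cached set
# can be registered with the decorator above. The 'example-empty' name is
# hypothetical; the real names are registered below ('obsolete', 'unstable',
# ...). getrevs(repo, 'example-empty') would compute the set on first access
# and then serve it from repo.obsstore.caches.
@cachefor('example-empty')
def _computeexampleemptyset(repo):
    """a hypothetical set that is always empty"""
    return set()
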
# To keep things simple we need to invalidate obsolescence caches when:
#
# - a new changeset is added
# - a public phase is changed
# - obsolescence markers are added
# - strip is used on a repo
def clearobscaches(repo):
    """Remove all obsolescence related caches from a repo

    This removes all caches in the obsstore if the obsstore already exists
    on the repo.

    (We could be smarter here given the exact event that triggered the cache
    clearing)"""
    # only clear caches if there is obsstore data in this repo
    if 'obsstore' in repo._filecache:
        repo.obsstore.caches.clear()

@cachefor('obsolete')
def _computeobsoleteset(repo):
    """the set of obsolete revisions"""
    obs = set()
    getrev = repo.changelog.nodemap.get
    getphase = repo._phasecache.phase
    for node in repo.obsstore.successors:
        rev = getrev(node)
        if rev is not None and getphase(repo, rev):
            obs.add(rev)
    return obs

@cachefor('unstable')
def _computeunstableset(repo):
    """the set of non obsolete revisions with obsolete parents"""
    # revset is not efficient enough here
    # we do (obsolete()::) - obsolete() by hand
    obs = getrevs(repo, 'obsolete')
    if not obs:
        return set()
    cl = repo.changelog
    return set(r for r in cl.descendants(obs) if r not in obs)

@cachefor('suspended')
def _computesuspendedset(repo):
    """the set of obsolete parents with non obsolete descendants"""
    suspended = repo.changelog.ancestors(getrevs(repo, 'unstable'))
    return set(r for r in getrevs(repo, 'obsolete') if r in suspended)

@cachefor('extinct')
def _computeextinctset(repo):
    """the set of obsolete parents without non obsolete descendants"""
    return getrevs(repo, 'obsolete') - getrevs(repo, 'suspended')


@cachefor('bumped')
def _computebumpedset(repo):
    """the set of revs trying to obsolete public revisions"""
    bumped = set()
    # utility functions (avoid attribute lookup in the loop)
    phase = repo._phasecache.phase # would be faster to grab the full list
    public = phases.public
    cl = repo.changelog
    torev = cl.nodemap.get
    obs = getrevs(repo, 'obsolete')
    for rev in repo:
        # We only evaluate mutable, non-obsolete revisions
        if (public < phase(repo, rev)) and (rev not in obs):
            node = cl.node(rev)
            # (future) A cache of precursors may be worth it if split is
            # very common
            for pnode in allprecursors(repo.obsstore, [node],
                                       ignoreflags=bumpedfix):
                prev = torev(pnode) # unfiltered! but so is phasecache
                if (prev is not None) and (phase(repo, prev) <= public):
                    # we have a public precursor
                    bumped.add(rev)
                    break # Next draft!
    return bumped

@cachefor('divergent')
def _computedivergentset(repo):
    """the set of revs that compete to be the final successors of some
    revision."""
    divergent = set()
    obsstore = repo.obsstore
    newermap = {}
    for ctx in repo.set('(not public()) - obsolete()'):
        mark = obsstore.precursors.get(ctx.node(), ())
        toprocess = set(mark)
        while toprocess:
            prec = toprocess.pop()[0]
            if prec not in newermap:
                successorssets(repo, prec, newermap)
            newer = [n for n in newermap[prec] if n]
            if len(newer) > 1:
                divergent.add(ctx.rev())
                break
            toprocess.update(obsstore.precursors.get(prec, ()))
    return divergent


def createmarkers(repo, relations, flag=0, metadata=None):
    """Add obsolete markers between changesets in a repo

    <relations> must be an iterable of (<old>, (<new>, ...)) tuples.
    `old` and `new` are changectx objects.

    Trying to obsolete a public changeset will raise an exception.

    Current user and date are used except if specified otherwise in the
    metadata attribute.

    This function operates within a transaction of its own, but does
    not take any lock on the repo.
    """
    # prepare metadata
    if metadata is None:
        metadata = {}
    if 'date' not in metadata:
        metadata['date'] = '%i %i' % util.makedate()
    if 'user' not in metadata:
        metadata['user'] = repo.ui.username()
    tr = repo.transaction('add-obsolescence-marker')
    try:
        for prec, sucs in relations:
            if not prec.mutable():
                raise util.Abort("cannot obsolete immutable changeset: %s"
                                 % prec)
            nprec = prec.node()
            nsucs = tuple(s.node() for s in sucs)
            if nprec in nsucs:
                raise util.Abort("changeset %s cannot obsolete itself"
                                 % prec)
            repo.obsstore.create(tr, nprec, nsucs, flag, metadata)
            repo.filteredrevcache.clear()
        tr.close()
    finally:
        tr.release()
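
# Illustrative sketch, not part of the original changeset: the entry point
# the new docstring points humans to. Given a repo and two changectx objects
# (hypothetical names `old` and `new`), a single replacement marker is
# recorded like this. Note this requires the obsolete feature to be enabled
# (_enabled), otherwise obsstore.add aborts.
def _examplecreatemarkers(repo, old, new):
    """Record that `old` was rewritten as `new` (both are changectx)."""
    # one (precursor, successors) pair per rewritten changeset; transaction,
    # user and date are handled by createmarkers itself
    createmarkers(repo, [(old, (new,))])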