obsstore: `create` method return True if a marker is actually added...
Pierre-Yves David
r20584:224a058f default
@@ -1,854 +1,857 @@
# obsolete.py - obsolete markers handling
#
# Copyright 2012 Pierre-Yves David <pierre-yves.david@ens-lyon.org>
#                Logilab SA <contact@logilab.fr>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""Obsolete markers handling

An obsolete marker maps an old changeset to a list of new
changesets. If the list of new changesets is empty, the old changeset
is said to be "killed". Otherwise, the old changeset is being
"replaced" by the new changesets.

Obsolete markers can be used to record and distribute changeset graph
transformations performed by history rewriting operations, and help
build new tools to reconcile conflicting rewriting actions. To
facilitate conflict resolution, markers include various annotations
besides old and new changeset identifiers, such as creation date or
author name.

The old obsoleted changeset is called the "precursor" and possible
replacements are called "successors". Markers that use changeset X as a
precursor are called "successor markers of X" because they hold information
about the successors of X. Markers that use changeset Y as a successor are
called "precursor markers of Y" because they hold information about the
precursors of Y.

Examples:

- When changeset A is replaced by a changeset A', one marker is stored:

    (A, (A',))

- When changesets A and B are folded into a new changeset C, two markers are
  stored:

    (A, (C,)) and (B, (C,))

- When changeset A is simply "pruned" from the graph, a marker is created:

    (A, ())

- When changeset A is split into B and C, a single marker is used:

    (A, (B, C))

We use a single marker to distinguish the "split" case from the "divergence"
case. If two independent operations rewrite the same changeset A into A' and
A'', we have an error case: divergent rewriting. We can detect it because
two markers will be created independently:

    (A, (B,)) and (A, (C,))

Format
------

Markers are stored in an append-only file stored in
'.hg/store/obsstore'.

The file starts with a version header:

- 1 unsigned byte: version number, starting at zero.


The header is followed by the markers. Each marker is made of:

- 1 unsigned byte: number of new changesets "N", which can be zero.

- 1 unsigned 32-bit integer: metadata size "M" in bytes.

- 1 byte: a bit field. It is reserved for flags used in common
  obsolete marker operations, to avoid repeated decoding of metadata
  entries.

- 20 bytes: obsoleted changeset identifier.

- N*20 bytes: new changeset identifiers.

- M bytes: metadata as a sequence of nul-terminated strings. Each
  string contains a key and a value, separated by a colon ':', without
  additional encoding. Keys cannot contain '\0' or ':' and values
  cannot contain '\0'.
"""
import struct
import util, base85, node
import phases
from i18n import _

_pack = struct.pack
_unpack = struct.unpack

_SEEK_END = 2 # os.SEEK_END was introduced in Python 2.5

# the obsolete feature is not mature enough to be enabled by default.
# you have to rely on third party extensions to enable this.
_enabled = False

# data used for parsing and writing
_fmversion = 0
_fmfixed = '>BIB20s'
_fmnode = '20s'
_fmfsize = struct.calcsize(_fmfixed)
_fnodesize = struct.calcsize(_fmnode)
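
These constants pin down the binary layout described in the module docstring. As a standalone sanity-check sketch (plain `struct`, stand-in values, nothing repo-specific), packing one marker by hand looks like this:

```python
import struct

# '>BIB20s': 1-byte successor count, 4-byte metadata size, 1-byte flags,
# 20-byte precursor node; successor nodes and metadata bytes follow.
fmfixed = '>BIB20s'
prec = b'\x11' * 20          # stand-in 20-byte node id
suc = b'\x22' * 20
meta = b'date:0 0'
raw = struct.pack(fmfixed, 1, len(meta), 0, prec) + suc + meta

assert struct.calcsize(fmfixed) == 26    # the fixed part is always 26 bytes
nbsuc, mdsize, flags, pre = struct.unpack(fmfixed, raw[:26])
assert (nbsuc, mdsize, flags, pre) == (1, len(meta), 0, prec)
```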

### obsolescence marker flag

## bumpedfix flag
#
# When a changeset A' succeeds a changeset A which became public, we call A'
# "bumped" because it is a successor of a public changeset
#
# o  A' (bumped)
# |`:
# | o  A
# |/
# o  Z
#
# The way to solve this situation is to create a new changeset Ad as a child
# of A. This changeset has the same content as A'. So the diff from A to A'
# is the same as the diff from A to Ad. Ad is marked as a successor of A'
#
# o  Ad
# |`:
# | x  A'
# |'|
# o |  A
# |/
# o  Z
#
# But by transitivity Ad is also a successor of A. To avoid having Ad marked
# as bumped too, we add the `bumpedfix` flag to the marker, <A', (Ad,)>.
# This flag means that the successor expresses the changes between the public
# and bumped versions and fixes the situation, breaking the transitivity of
# "bumped" here.
bumpedfix = 1

def _readmarkers(data):
    """Read and enumerate markers from raw data"""
    off = 0
    diskversion = _unpack('>B', data[off:off + 1])[0]
    off += 1
    if diskversion != _fmversion:
        raise util.Abort(_('parsing obsolete marker: unknown version %r')
                         % diskversion)

    # Loop on markers
    l = len(data)
    while off + _fmfsize <= l:
        # read fixed part
        cur = data[off:off + _fmfsize]
        off += _fmfsize
        nbsuc, mdsize, flags, pre = _unpack(_fmfixed, cur)
        # read replacement
        sucs = ()
        if nbsuc:
            s = (_fnodesize * nbsuc)
            cur = data[off:off + s]
            sucs = _unpack(_fmnode * nbsuc, cur)
            off += s
        # read metadata
        # (metadata will be decoded on demand)
        metadata = data[off:off + mdsize]
        if len(metadata) != mdsize:
            raise util.Abort(_('parsing obsolete marker: metadata is too '
                               'short, %d bytes expected, got %d')
                             % (mdsize, len(metadata)))
        off += mdsize
        yield (pre, sucs, flags, metadata)

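To see `_readmarkers` end to end, here is a hedged sketch of a minimal one-marker payload; the version byte comes first, and a pruned changeset simply has a zero successor count:

```python
import struct

prec = b'\x11' * 20
meta = b'user:alice'
blob = (struct.pack('>B', 0)                             # version header
        + struct.pack('>BIB20s', 0, len(meta), 0, prec)  # pruned: no successors
        + meta)
# list(_readmarkers(blob)) would yield [(prec, (), 0, b'user:alice')]
```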
def encodemeta(meta):
    """Return encoded metadata string to string mapping.

    Assume no ':' in key and no '\0' in both key and value."""
    for key, value in meta.iteritems():
        if ':' in key or '\0' in key:
            raise ValueError("':' and '\0' are forbidden in metadata keys")
        if '\0' in value:
            raise ValueError("'\0' is forbidden in metadata values")
    return '\0'.join(['%s:%s' % (k, meta[k]) for k in sorted(meta)])

def decodemeta(data):
    """Return string to string dictionary from encoded version."""
    d = {}
    for l in data.split('\0'):
        if l:
            # values may contain ':', so split on the first colon only
            key, value = l.split(':', 1)
            d[key] = value
    return d

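A quick round-trip through these two helpers (a sketch; keys come back in sorted order because `encodemeta` sorts them):

```python
meta = {'user': 'alice', 'date': '0 0'}
blob = encodemeta(meta)        # 'date:0 0\x00user:alice'
assert decodemeta(blob) == meta
```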
class marker(object):
    """Wrap obsolete marker raw data"""

    def __init__(self, repo, data):
        # the repo argument will be used to create changectx in later versions
        self._repo = repo
        self._data = data
        self._decodedmeta = None

    def __hash__(self):
        return hash(self._data)

    def __eq__(self, other):
        if type(other) != type(self):
            return False
        return self._data == other._data

    def precnode(self):
        """Precursor changeset node identifier"""
        return self._data[0]

    def succnodes(self):
        """List of successor changeset node identifiers"""
        return self._data[1]

    def metadata(self):
        """Decoded metadata dictionary"""
        if self._decodedmeta is None:
            self._decodedmeta = decodemeta(self._data[3])
        return self._decodedmeta

    def date(self):
        """Creation date as (unixtime, offset)"""
        parts = self.metadata()['date'].split(' ')
        return (float(parts[0]), int(parts[1]))

class obsstore(object):
    """Store obsolete markers

    Markers can be accessed with two mappings:
    - precursors[x] -> set(markers on precursor edges of x)
    - successors[x] -> set(markers on successor edges of x)
    """

    def __init__(self, sopener):
        # caches for various obsolescence related sets
        self.caches = {}
        self._all = []
        # new markers to serialize
        self.precursors = {}
        self.successors = {}
        self.sopener = sopener
        data = sopener.tryread('obsstore')
        if data:
            self._load(_readmarkers(data))

    def __iter__(self):
        return iter(self._all)

    def __nonzero__(self):
        return bool(self._all)

    def create(self, transaction, prec, succs=(), flag=0, metadata=None):
        """obsolete: add a new obsolete marker

        * ensure it is hashable
        * check mandatory metadata
        * encode metadata

        If you are a human writing code creating markers you want to use the
        `createmarkers` function in this module instead.
+
+        Return True if a new marker has been added, False if the marker
+        already existed (no-op).
        """
        if metadata is None:
            metadata = {}
        if 'date' not in metadata:
            metadata['date'] = "%d %d" % util.makedate()
        if len(prec) != 20:
            raise ValueError(prec)
        for succ in succs:
            if len(succ) != 20:
                raise ValueError(succ)
        marker = (str(prec), tuple(succs), int(flag), encodemeta(metadata))
-        self.add(transaction, [marker])
+        return bool(self.add(transaction, [marker]))

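A hedged usage sketch of `create` under the new return value; `repo`, `node_a` and `node_b` are hypothetical (a repo object and two 20-byte node ids). Passing an explicit date keeps the two marker tuples identical, so the second call is filtered as a duplicate:

```python
tr = repo.transaction('add-obsolescence-marker')
try:
    m = {'date': '0 0', 'user': 'alice'}
    assert repo.obsstore.create(tr, node_a, (node_b,), metadata=m) is True
    assert repo.obsstore.create(tr, node_a, (node_b,), metadata=m) is False
    tr.close()
finally:
    tr.release()
```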
    def add(self, transaction, markers):
        """Add new markers to the store

        Takes care of filtering out duplicates.
        Returns the number of new markers."""
        if not _enabled:
            raise util.Abort('obsolete feature is not enabled on this repo')
        known = set(self._all)
        new = []
        for m in markers:
            if m not in known:
                known.add(m)
                new.append(m)
        if new:
            f = self.sopener('obsstore', 'ab')
            try:
                # Whether the file's current position is at the beginning or
                # at the end after opening a file for appending is
                # implementation defined. So we must seek to the end before
                # calling tell(), or we may get a zero offset for non-zero
                # sized files on some platforms (issue3543).
                f.seek(0, _SEEK_END)
                offset = f.tell()
                transaction.add('obsstore', offset)
                # offset == 0: new file - add the version header
                for bytes in _encodemarkers(new, offset == 0):
                    f.write(bytes)
            finally:
                # XXX: f.close() == filecache invalidation == obsstore rebuilt.
                # call 'filecacheentry.refresh()' here
                f.close()
            self._load(new)
            # new markers *may* have changed several sets. invalidate the
            # caches.
            self.caches.clear()
        return len(new)

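The seek-before-tell dance above is the portable append pattern; isolated as a standalone sketch on a plain file object (hypothetical filename):

```python
import os

f = open('some-append-only-file', 'ab')
try:
    # the append-mode position before the first write is implementation
    # defined, so seek explicitly before trusting tell() (issue3543)
    f.seek(0, os.SEEK_END)
    offset = f.tell()   # 0 means a fresh file: write the version header first
finally:
    f.close()
```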
    def mergemarkers(self, transaction, data):
        markers = _readmarkers(data)
        self.add(transaction, markers)

    def _load(self, markers):
        for mark in markers:
            self._all.append(mark)
            pre, sucs = mark[:2]
            self.successors.setdefault(pre, set()).add(mark)
            for suc in sucs:
                self.precursors.setdefault(suc, set()).add(mark)
        if node.nullid in self.precursors:
            raise util.Abort(_('bad obsolescence marker detected: '
                               'invalid successor nullid'))

def _encodemarkers(markers, addheader=False):
    # Kept separate from flushmarkers(), it will be reused for
    # markers exchange.
    if addheader:
        yield _pack('>B', _fmversion)
    for marker in markers:
        yield _encodeonemarker(marker)


def _encodeonemarker(marker):
    pre, sucs, flags, metadata = marker
    nbsuc = len(sucs)
    format = _fmfixed + (_fmnode * nbsuc)
    data = [nbsuc, len(metadata), flags, pre]
    data.extend(sucs)
    return _pack(format, *data) + metadata

# arbitrarily picked to fit into the 8K limit from the HTTP server
# you have to take into account:
# - the version header
# - the base85 encoding
_maxpayload = 5300

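Tying `_encodeonemarker` back to the layout: for a pruned marker the result is just the 26-byte fixed part plus the metadata. A sketch reusing the stand-in values from earlier:

```python
m = (b'\x11' * 20, (), 0, b'user:alice')   # (prec, sucs, flags, metadata)
encoded = _encodeonemarker(m)
assert len(encoded) == 26 + 0 * 20 + len(b'user:alice')
```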
def listmarkers(repo):
    """List markers over pushkey"""
    if not repo.obsstore:
        return {}
    keys = {}
    parts = []
    currentlen = _maxpayload * 2  # ensure we create a new part
    for marker in repo.obsstore:
        nextdata = _encodeonemarker(marker)
        if (len(nextdata) + currentlen > _maxpayload):
            currentpart = []
            currentlen = 0
            parts.append(currentpart)
        currentpart.append(nextdata)
        currentlen += len(nextdata)
    for idx, part in enumerate(reversed(parts)):
        data = ''.join([_pack('>B', _fmversion)] + part)
        keys['dump%i' % idx] = base85.b85encode(data)
    return keys

def pushmarker(repo, key, old, new):
    """Push markers over pushkey"""
    if not key.startswith('dump'):
        repo.ui.warn(_('unknown key: %r') % key)
        return 0
    if old:
        repo.ui.warn(_('unexpected old value for %r') % key)
        return 0
    data = base85.b85decode(new)
    lock = repo.lock()
    try:
        tr = repo.transaction('pushkey: obsolete markers')
        try:
            repo.obsstore.mergemarkers(tr, data)
            tr.close()
            return 1
        finally:
            tr.release()
    finally:
        lock.release()

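The receiving side mirrors `listmarkers`: each 'dump%i' value is self-contained, starting with its own version header, so every part re-parses like a standalone obsstore payload. A hedged sketch with a hypothetical `repo`:

```python
for key, value in sorted(listmarkers(repo).items()):
    data = base85.b85decode(value)
    for mark in _readmarkers(data):     # works because each part carries
        pass                            # its own version header
```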
def allmarkers(repo):
    """all obsolete markers known in a repository"""
    for markerdata in repo.obsstore:
        yield marker(repo, markerdata)

def precursormarkers(ctx):
    """obsolete markers marking this changeset as a successor"""
    for data in ctx._repo.obsstore.precursors.get(ctx.node(), ()):
        yield marker(ctx._repo, data)

def successormarkers(ctx):
    """obsolete markers making this changeset obsolete"""
    for data in ctx._repo.obsstore.successors.get(ctx.node(), ()):
        yield marker(ctx._repo, data)

def allsuccessors(obsstore, nodes, ignoreflags=0):
    """Yield node for every successor of <nodes>.

    Some successors may be unknown locally.

    This is a linear yield unsuited to detecting split changesets. It includes
    initial nodes too."""
    remaining = set(nodes)
    seen = set(remaining)
    while remaining:
        current = remaining.pop()
        yield current
        for mark in obsstore.successors.get(current, ()):
            # ignore markers flagged with the specified flag
            if mark[2] & ignoreflags:
                continue
            for suc in mark[1]:
                if suc not in seen:
                    seen.add(suc)
                    remaining.add(suc)

def allprecursors(obsstore, nodes, ignoreflags=0):
    """Yield node for every precursor of <nodes>.

    Some precursors may be unknown locally.

    This is a linear yield unsuited to detecting folded changesets. It includes
    initial nodes too."""

    remaining = set(nodes)
    seen = set(remaining)
    while remaining:
        current = remaining.pop()
        yield current
        for mark in obsstore.precursors.get(current, ()):
            # ignore markers flagged with the specified flag
            if mark[2] & ignoreflags:
                continue
            prec = mark[0]
            if prec not in seen:
                seen.add(prec)
                remaining.add(prec)

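Because these walkers only touch the `successors`/`precursors` mappings, they can be exercised with a stand-in store (a sketch, not a real obsstore):

```python
class fakestore(object):
    # node -> set of marker tuples (prec, sucs, flags, metadata)
    successors = {'A': set([('A', ('B',), 0, '')]),
                  'B': set([('B', ('C', 'D'), 0, '')])}

# initial nodes are included, and the walk is transitive
assert set(allsuccessors(fakestore(), ['A'])) == set(['A', 'B', 'C', 'D'])
```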
def foreground(repo, nodes):
    """return all nodes in the "foreground" of other nodes

    The foreground of a revision is anything reachable using parent -> children
    or precursor -> successor relations. It is very similar to "descendant" but
    augmented with obsolescence information.

    Beware that obsolescence cycles may result in complex situations.
    """
    repo = repo.unfiltered()
    foreground = set(repo.set('%ln::', nodes))
    if repo.obsstore:
        # We only need this complicated logic if there is obsolescence
        # XXX will probably deserve an optimised revset.
        nm = repo.changelog.nodemap
        plen = -1
        # compute the whole set of successors or descendants
        while len(foreground) != plen:
            plen = len(foreground)
            succs = set(c.node() for c in foreground)
            mutable = [c.node() for c in foreground if c.mutable()]
            succs.update(allsuccessors(repo.obsstore, mutable))
            known = (n for n in succs if n in nm)
            foreground = set(repo.set('%ln::', known))
    return set(c.node() for c in foreground)


def successorssets(repo, initialnode, cache=None):
    """Return all sets of successors of the initial node

    The successors set of a changeset A is a group of revisions that succeed
    A. It succeeds A as a consistent whole, each revision being only a partial
    replacement. The successors set contains non-obsolete changesets only.

    This function returns the full list of successor sets which is why it
    returns a list of tuples and not just a single tuple. Each tuple is a valid
    successors set. Note that (A,) may be a valid successors set for changeset
    A (see below).

    In most cases, a changeset A will have a single element (e.g. the changeset
    A is replaced by A') in its successors set. Though, it is also common for a
    changeset A to have no elements in its successor set (e.g. the changeset
    has been pruned). Therefore, the returned list of successors sets will be
    [(A',)] or [], respectively.

    When a changeset A is split into A' and B', however, it will result in a
    successors set containing more than a single element, i.e. [(A',B')].
    Divergent changesets will result in multiple successors sets, i.e. [(A',),
    (A'')].

    If a changeset A is not obsolete, then it will conceptually have no
    successors set. To distinguish this from a pruned changeset, the successor
    set will only contain itself, i.e. [(A,)].

    Finally, successors unknown locally are considered to be pruned (obsoleted
    without any successors).

    The optional `cache` parameter is a dictionary that may contain precomputed
    successors sets. It is meant to reuse the computation of a previous call to
    `successorssets` when multiple calls are made at the same time. The cache
    dictionary is updated in place. The caller is responsible for its life
    span. Code that makes multiple calls to `successorssets` *must* use this
    cache mechanism or suffer terrible performance.

    """

    succmarkers = repo.obsstore.successors

    # Stack of nodes we search successors sets for
    toproceed = [initialnode]
    # set version of above list for fast loop detection
    # element added to "toproceed" must be added here
    stackedset = set(toproceed)
    if cache is None:
        cache = {}

    # This while loop is the flattened version of a recursive search for
    # successors sets
    #
    # def successorssets(x):
    #    successors = directsuccessors(x)
    #    ss = [[]]
    #    for succ in successors:
    #        # product as in itertools cartesian product
    #        ss = product(ss, successorssets(succ))
    #    return ss
    #
    # But we can not use plain recursive calls here:
    # - that would blow the python call stack
    # - obsolescence markers may have cycles, we need to handle them.
    #
    # The `toproceed` list acts as our call stack. Every node we search
    # successors sets for is stacked there.
    #
    # The `stackedset` is a set version of this stack used to check if a node
    # is already stacked. This check is used to detect cycles and prevent
    # infinite loops.
    #
    # successors sets of all nodes are stored in the `cache` dictionary.
    #
    # After this while loop ends we use the cache to return the successors sets
    # for the node requested by the caller.
    while toproceed:
        # Every iteration tries to compute the successors sets of the topmost
        # node of the stack: CURRENT.
        #
        # There are four possible outcomes:
        #
        # 1) We already know the successors sets of CURRENT:
        #    -> mission accomplished, pop it from the stack.
        # 2) Node is not obsolete:
        #    -> the node is its own successors set. Add it to the cache.
        # 3) We do not know the successors sets of the direct successors of
        #    CURRENT:
        #    -> We add those successors to the stack.
        # 4) We know the successors sets of all direct successors of CURRENT:
        #    -> We can compute the CURRENT successors set and add it to the
        #       cache.
        #
        current = toproceed[-1]
        if current in cache:
            # case (1): We already know the successors sets
            stackedset.remove(toproceed.pop())
        elif current not in succmarkers:
            # case (2): The node is not obsolete.
            if current in repo:
                # We have a valid last successor.
                cache[current] = [(current,)]
            else:
                # Final obsolete version is unknown locally.
                # Do not count that as a valid successor
                cache[current] = []
        else:
            # cases (3) and (4)
            #
            # We proceed in two phases. Phase 1 aims to distinguish case (3)
            # from case (4):
            #
            # For each direct successor of CURRENT, we check whether its
            # successors sets are known. If they are not, we stack the
            # unknown node and proceed to the next iteration of the while
            # loop. (case 3)
            #
            # During this step, we may detect obsolescence cycles: a node
            # with unknown successors sets but already in the call stack.
            # In such a situation, we arbitrarily set the successors sets of
            # the node to nothing (node pruned) to break the cycle.
            #
            # If no break was encountered we proceed to phase 2.
            #
            # Phase 2 computes successors sets of CURRENT (case 4); see details
            # in phase 2 itself.
            #
            # Note the two levels of iteration in each phase.
            # - The first one handles obsolescence markers using CURRENT as
            #   precursor (successors markers of CURRENT).
            #
            #   Having multiple entries here means divergence.
            #
            # - The second one handles successors defined in each marker.
            #
            #   Having none means pruned node, multiple successors means split,
            #   a single successor is a standard replacement.
            #
            for mark in sorted(succmarkers[current]):
                for suc in mark[1]:
                    if suc not in cache:
                        if suc in stackedset:
                            # cycle breaking
                            cache[suc] = []
                        else:
                            # case (3) If we have not computed successors sets
                            # of one of those successors we add it to the
                            # `toproceed` stack and stop all work for this
                            # iteration.
                            toproceed.append(suc)
                            stackedset.add(suc)
                            break
                else:
                    continue
                break
            else:
                # case (4): we know all successors sets of all direct
                # successors
                #
                # The successors set contributed by each marker depends on the
                # successors sets of all its "successors" nodes.
                #
                # Each different marker is a divergence in the obsolescence
                # history. It contributes successors sets distinct from other
                # markers.
                #
                # Within a marker, a successor may have divergent successors
                # sets. In such a case, the marker will contribute multiple
                # divergent successors sets. If multiple successors have
                # divergent successors sets, a cartesian product is used.
                #
                # At the end we post-process successors sets to remove
                # duplicated entries and successors sets that are strict
                # subsets of another one.
                succssets = []
                for mark in sorted(succmarkers[current]):
                    # successors sets contributed by this marker
                    markss = [[]]
                    for suc in mark[1]:
                        # cartesian product with previous successors
                        productresult = []
                        for prefix in markss:
                            for suffix in cache[suc]:
                                newss = list(prefix)
                                for part in suffix:
                                    # do not duplicate entries in a successors
                                    # set; first entry wins.
                                    if part not in newss:
                                        newss.append(part)
                                productresult.append(newss)
                        markss = productresult
                    succssets.extend(markss)
                # remove duplicates and subsets
                seen = []
                final = []
                candidate = sorted(((set(s), s) for s in succssets if s),
                                   key=lambda x: len(x[1]), reverse=True)
                for setversion, listversion in candidate:
                    for seenset in seen:
                        if setversion.issubset(seenset):
                            break
                    else:
                        final.append(listversion)
                        seen.append(setversion)
                final.reverse() # put small successors sets first
                cache[current] = final
    return cache[initialnode]

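The cartesian-product step in case (4) is easy to get wrong; here it is isolated as a pure-Python sketch for one marker A -> (B, C), where B itself has divergent successors sets:

```python
cache = {'B': [('B1',), ('B2',)], 'C': [('C1',)]}   # B is divergent
markss = [[]]
for suc in ('B', 'C'):                              # one marker: A -> (B, C)
    productresult = []
    for prefix in markss:
        for suffix in cache[suc]:
            newss = list(prefix)
            for part in suffix:
                if part not in newss:               # first entry wins
                    newss.append(part)
            productresult.append(newss)
    markss = productresult
assert markss == [['B1', 'C1'], ['B2', 'C1']]       # one set per divergence
```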
def _knownrevs(repo, nodes):
    """yield revision numbers of known nodes passed in parameters

    Unknown revisions are silently ignored."""
    torev = repo.changelog.nodemap.get
    for n in nodes:
        rev = torev(n)
        if rev is not None:
            yield rev

# mapping of 'set-name' -> <function to compute this set>
cachefuncs = {}
def cachefor(name):
    """Decorator to register a function as computing the cache for a set"""
    def decorator(func):
        assert name not in cachefuncs
        cachefuncs[name] = func
        return func
    return decorator

def getrevs(repo, name):
    """Return the set of revisions that belong to the <name> set

    Such access may compute the set and cache it for future use"""
    repo = repo.unfiltered()
    if not repo.obsstore:
        return ()
    if name not in repo.obsstore.caches:
        repo.obsstore.caches[name] = cachefuncs[name](repo)
    return repo.obsstore.caches[name]

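Registering an additional set is just a matter of stacking the decorator, as the built-in sets below do. A hypothetical example (the name 'example' and its body are illustrative only, not part of the module):

```python
@cachefor('example')
def _computeexampleset(repo):
    """hypothetical: obsolete revisions that are also repository heads"""
    heads = set(repo.changelog.headrevs())
    return getrevs(repo, 'obsolete') & heads
# getrevs(repo, 'example') would compute the set once, then serve it
# from repo.obsstore.caches until clearobscaches() is called.
```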
# To be simple we need to invalidate obsolescence caches when:
#
# - a new changeset is added
# - public phase is changed
# - obsolescence markers are added
# - strip is used on a repo
def clearobscaches(repo):
    """Remove all obsolescence related caches from a repo

    This removes all caches in the obsstore if the obsstore already exists on
    the repo.

    (We could be smarter here given the exact event that triggered the cache
    clearing)"""
    # only clear caches if there is obsstore data in this repo
    if 'obsstore' in repo._filecache:
        repo.obsstore.caches.clear()

@cachefor('obsolete')
def _computeobsoleteset(repo):
    """the set of obsolete revisions"""
    obs = set()
    getrev = repo.changelog.nodemap.get
    getphase = repo._phasecache.phase
    for node in repo.obsstore.successors:
        rev = getrev(node)
        if rev is not None and getphase(repo, rev):
            obs.add(rev)
    return obs

@cachefor('unstable')
def _computeunstableset(repo):
    """the set of non obsolete revisions with obsolete parents"""
    # revset is not efficient enough here
    # we do (obsolete()::) - obsolete() by hand
    obs = getrevs(repo, 'obsolete')
    if not obs:
        return set()
    cl = repo.changelog
    return set(r for r in cl.descendants(obs) if r not in obs)

@cachefor('suspended')
def _computesuspendedset(repo):
    """the set of obsolete parents with non obsolete descendants"""
    suspended = repo.changelog.ancestors(getrevs(repo, 'unstable'))
    return set(r for r in getrevs(repo, 'obsolete') if r in suspended)

@cachefor('extinct')
def _computeextinctset(repo):
    """the set of obsolete parents without non obsolete descendants"""
    return getrevs(repo, 'obsolete') - getrevs(repo, 'suspended')


@cachefor('bumped')
def _computebumpedset(repo):
    """the set of revs trying to obsolete public revisions"""
    bumped = set()
    # utility functions (avoid attribute lookups in the loop)
    phase = repo._phasecache.phase # would be faster to grab the full list
    public = phases.public
    cl = repo.changelog
    torev = cl.nodemap.get
    obs = getrevs(repo, 'obsolete')
    for rev in repo:
        # We only evaluate mutable, non-obsolete revisions
        if (public < phase(repo, rev)) and (rev not in obs):
            node = cl.node(rev)
            # (future) A cache of precursors may be worth it if split is very
            # common
            for pnode in allprecursors(repo.obsstore, [node],
                                       ignoreflags=bumpedfix):
                prev = torev(pnode) # unfiltered! but so is phasecache
                if (prev is not None) and (phase(repo, prev) <= public):
                    # we have a public precursor
                    bumped.add(rev)
                    break # Next draft!
    return bumped

@cachefor('divergent')
def _computedivergentset(repo):
    """the set of revs that compete to be the final successors of some
    revision."""
    divergent = set()
    obsstore = repo.obsstore
    newermap = {}
    for ctx in repo.set('(not public()) - obsolete()'):
        mark = obsstore.precursors.get(ctx.node(), ())
        toprocess = set(mark)
        while toprocess:
            prec = toprocess.pop()[0]
            if prec not in newermap:
                successorssets(repo, prec, newermap)
            newer = [n for n in newermap[prec] if n]
            if len(newer) > 1:
                divergent.add(ctx.rev())
                break
            toprocess.update(obsstore.precursors.get(prec, ()))
    return divergent


def createmarkers(repo, relations, flag=0, metadata=None):
    """Add obsolete markers between changesets in a repo

    <relations> must be an iterable of (<old>, (<new>, ...)[, {metadata}])
    tuples, where `old` and each `new` are changectx objects. The per-relation
    metadata is an optional dictionary containing metadata for this marker
    only. It is merged with the global metadata specified through the
    `metadata` argument of this function.

    Trying to obsolete a public changeset will raise an exception.

    Current user and date are used except if specified otherwise in the
    metadata attribute.

    This function operates within a transaction of its own, but does
    not take any lock on the repo.
    """
    # prepare metadata
    if metadata is None:
        metadata = {}
    if 'date' not in metadata:
        metadata['date'] = '%i %i' % util.makedate()
    if 'user' not in metadata:
        metadata['user'] = repo.ui.username()
    tr = repo.transaction('add-obsolescence-marker')
    try:
        for rel in relations:
            prec = rel[0]
            sucs = rel[1]
            localmetadata = metadata.copy()
            if 2 < len(rel):
                localmetadata.update(rel[2])

            if not prec.mutable():
                raise util.Abort("cannot obsolete immutable changeset: %s"
                                 % prec)
            nprec = prec.node()
            nsucs = tuple(s.node() for s in sucs)
            if nprec in nsucs:
                raise util.Abort("changeset %s cannot obsolete itself" % prec)
            repo.obsstore.create(tr, nprec, nsucs, flag, localmetadata)
            repo.filteredrevcache.clear()
        tr.close()
    finally:
        tr.release()
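
A hedged usage sketch (hypothetical `repo` and changectx objects `old`, `new`, `pruned`): one rewrite and one prune with per-relation metadata, sharing a global user:

```python
createmarkers(repo,
              [(old, (new,)),                        # old rewritten into new
               (pruned, (), {'note': 'dead end'})],  # pruned, extra metadata
              metadata={'user': 'alice'})
```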