obsolete: allow passing a revision to successorssets()
Dan Villiom Podlaski Christiansen
r19615:77d43476 default
@@ -1,819 +1,822 b''
# obsolete.py - obsolete markers handling
#
# Copyright 2012 Pierre-Yves David <pierre-yves.david@ens-lyon.org>
#                Logilab SA        <contact@logilab.fr>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""Obsolete markers handling

An obsolete marker maps an old changeset to a list of new
changesets. If the list of new changesets is empty, the old changeset
is said to be "killed". Otherwise, the old changeset is being
"replaced" by the new changesets.

Obsolete markers can be used to record and distribute changeset graph
transformations performed by history rewriting operations, and help
build new tools to reconcile conflicting rewriting actions. To
facilitate conflict resolution, markers include various annotations
besides the old and new changeset identifiers, such as creation date or
author name.

The old obsoleted changeset is called the "precursor" and possible
replacements are called "successors". Markers that use changeset X as a
precursor are called "successor markers of X" because they hold information
about the successors of X. Markers that use changeset Y as a successor are
called "precursor markers of Y" because they hold information about the
precursors of Y.

Examples:

- When changeset A is replaced by a changeset A', one marker is stored:

    (A, (A',))

- When changesets A and B are folded into a new changeset C, two markers are
  stored:

    (A, (C,)) and (B, (C,))

- When changeset A is simply "pruned" from the graph, a marker is created:

    (A, ())

- When changeset A is split into B and C, a single marker is used:

    (A, (B, C))

We use a single marker to distinguish the "split" case from the "divergence"
case. If two independent operations rewrite the same changeset A into A' and
A'', we have an error case: divergent rewriting. We can detect it because
two markers will be created independently:

    (A, (B,)) and (A, (C,))

Format
------

Markers are stored in an append-only file stored in
'.hg/store/obsstore'.

The file starts with a version header:

- 1 unsigned byte: version number, starting at zero.


The header is followed by the markers. Each marker is made of:

- 1 unsigned byte: number of new changesets "N", may be zero.

- 1 unsigned 32-bit integer: metadata size "M" in bytes.

- 1 byte: a bit field. It is reserved for flags used in common
  obsolete marker operations, to avoid repeated decoding of metadata
  entries.

- 20 bytes: obsoleted changeset identifier.

- N*20 bytes: new changeset identifiers.

- M bytes: metadata as a sequence of nul-terminated strings. Each
  string contains a key and a value, separated by a colon ':', without
  additional encoding. Keys cannot contain '\0' or ':' and values
  cannot contain '\0'.
"""
import struct
import util, base85, node
from i18n import _

_pack = struct.pack
_unpack = struct.unpack

_SEEK_END = 2 # os.SEEK_END was introduced in Python 2.5

# the obsolete feature is not mature enough to be enabled by default.
# you have to rely on a third party extension to enable this.
_enabled = False

# data used for parsing and writing
_fmversion = 0
_fmfixed = '>BIB20s'
_fmnode = '20s'
_fmfsize = struct.calcsize(_fmfixed)
_fnodesize = struct.calcsize(_fmnode)

### obsolescence marker flag

## bumpedfix flag
#
# When a changeset A' succeeds a changeset A which became public, we call A'
# "bumped" because it's a successor of a public changeset
#
#  o    A' (bumped)
#  |`:
#  | o  A
#  |/
#  o    Z
#
# The way to solve this situation is to create a new changeset Ad as a child
# of A. This changeset has the same content as A'. So the diff from A to A'
# is the same as the diff from A to Ad. Ad is marked as a successor of A'
#
#  o   Ad
#  |`:
#  | x A'
#  |'|
#  o | A
#  |/
#  o Z
#
# But by transitivity Ad is also a successor of A. To avoid having Ad marked
# as bumped too, we add the `bumpedfix` flag to the marker, <A', (Ad,)>.
# This flag means that the successor expresses the changes between the public
# and bumped versions and fixes the situation, breaking the transitivity of
# "bumped" here.
bumpedfix = 1

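# An illustrative sketch: the fix above could be recorded with a marker
# carrying this flag (hypothetical changectx `aprime` and `ad` standing for
# A' and Ad, inside an open transaction `tr`):
#
#   repo.obsstore.create(tr, aprime.node(), [ad.node()], flag=bumpedfix)
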
def _readmarkers(data):
    """Read and enumerate markers from raw data"""
    off = 0
    diskversion = _unpack('>B', data[off:off + 1])[0]
    off += 1
    if diskversion != _fmversion:
        raise util.Abort(_('parsing obsolete marker: unknown version %r')
                         % diskversion)

    # Loop on markers
    l = len(data)
    while off + _fmfsize <= l:
        # read fixed part
        cur = data[off:off + _fmfsize]
        off += _fmfsize
        nbsuc, mdsize, flags, pre = _unpack(_fmfixed, cur)
        # read replacement
        sucs = ()
        if nbsuc:
            s = (_fnodesize * nbsuc)
            cur = data[off:off + s]
            sucs = _unpack(_fmnode * nbsuc, cur)
            off += s
        # read metadata
        # (metadata will be decoded on demand)
        metadata = data[off:off + mdsize]
        if len(metadata) != mdsize:
            raise util.Abort(_('parsing obsolete marker: metadata is too '
                               'short, %d bytes expected, got %d')
                             % (mdsize, len(metadata)))
        off += mdsize
        yield (pre, sucs, flags, metadata)

def encodemeta(meta):
    """Return an encoded version of a string-to-string metadata mapping.

    Assumes no ':' in keys and no '\0' in either keys or values."""
    for key, value in meta.iteritems():
        if ':' in key or '\0' in key:
            raise ValueError("':' and '\0' are forbidden in metadata keys")
        if '\0' in value:
            raise ValueError("'\0' is forbidden in metadata values")
    return '\0'.join(['%s:%s' % (k, meta[k]) for k in sorted(meta)])

def decodemeta(data):
    """Return a string-to-string dictionary from an encoded version."""
    d = {}
    for l in data.split('\0'):
        if l:
            # values may contain ':', so split on the first colon only
            key, value = l.split(':', 1)
            d[key] = value
    return d

class marker(object):
    """Wrap obsolete marker raw data"""

    def __init__(self, repo, data):
        # the repo argument will be used to create changectx in later versions
        self._repo = repo
        self._data = data
        self._decodedmeta = None

    def precnode(self):
        """Precursor changeset node identifier"""
        return self._data[0]

    def succnodes(self):
        """List of successor changeset node identifiers"""
        return self._data[1]

    def metadata(self):
        """Decoded metadata dictionary"""
        if self._decodedmeta is None:
            self._decodedmeta = decodemeta(self._data[3])
        return self._decodedmeta

    def date(self):
        """Creation date as (unixtime, offset)"""
        parts = self.metadata()['date'].split(' ')
        return (float(parts[0]), int(parts[1]))

class obsstore(object):
    """Store obsolete markers

    Markers can be accessed with two mappings:
    - precursors[x] -> set(markers on precursors edges of x)
    - successors[x] -> set(markers on successors edges of x)
    """

    def __init__(self, sopener):
        # caches for various obsolescence related sets
        self.caches = {}
        self._all = []
        # new markers to serialize
        self.precursors = {}
        self.successors = {}
        self.sopener = sopener
        data = sopener.tryread('obsstore')
        if data:
            self._load(_readmarkers(data))

    def __iter__(self):
        return iter(self._all)

    def __nonzero__(self):
        return bool(self._all)

    def create(self, transaction, prec, succs=(), flag=0, metadata=None):
        """Add a new obsolete marker

        * ensure it is hashable
        * check mandatory metadata
        * encode metadata
        """
        if metadata is None:
            metadata = {}
        if 'date' not in metadata:
            metadata['date'] = "%d %d" % util.makedate()
        if len(prec) != 20:
            raise ValueError(prec)
        for succ in succs:
            if len(succ) != 20:
                raise ValueError(succ)
        marker = (str(prec), tuple(succs), int(flag), encodemeta(metadata))
        self.add(transaction, [marker])

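    # An illustrative sketch (hypothetical changectx `old` and `new`, with an
    # open transaction `tr`): record that `old` was rewritten into `new`:
    #
    #   repo.obsstore.create(tr, old.node(), [new.node()],
    #                        metadata={'user': repo.ui.username()})
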
    def add(self, transaction, markers):
        """Add new markers to the store

        Takes care of filtering out duplicates.
        Returns the number of new markers."""
        if not _enabled:
            raise util.Abort('obsolete feature is not enabled on this repo')
        new = [m for m in markers if m not in self._all]
        if new:
            f = self.sopener('obsstore', 'ab')
            try:
                # Whether the file's current position is at the beginning or
                # at the end after opening it for appending is implementation
                # defined. So we must seek to the end before calling tell(),
                # or we may get a zero offset for non-zero sized files on
                # some platforms (issue3543).
                f.seek(0, _SEEK_END)
                offset = f.tell()
                transaction.add('obsstore', offset)
                # offset == 0: new file - add the version header
                for bytes in _encodemarkers(new, offset == 0):
                    f.write(bytes)
            finally:
                # XXX: f.close() == filecache invalidation == obsstore rebuilt.
                # call 'filecacheentry.refresh()' here
                f.close()
            self._load(new)
            # new markers *may* have changed several sets. invalidate the
            # caches.
            self.caches.clear()
        return len(new)

    def mergemarkers(self, transaction, data):
        markers = _readmarkers(data)
        self.add(transaction, markers)

    def _load(self, markers):
        for mark in markers:
            self._all.append(mark)
            pre, sucs = mark[:2]
            self.successors.setdefault(pre, set()).add(mark)
            for suc in sucs:
                self.precursors.setdefault(suc, set()).add(mark)
        if node.nullid in self.precursors:
            raise util.Abort(_('bad obsolescence marker detected: '
                               'invalid successors nullid'))

def _encodemarkers(markers, addheader=False):
    # Kept separate from flushmarkers(), it will be reused for
    # markers exchange.
    if addheader:
        yield _pack('>B', _fmversion)
    for marker in markers:
        yield _encodeonemarker(marker)


def _encodeonemarker(marker):
    pre, sucs, flags, metadata = marker
    nbsuc = len(sucs)
    format = _fmfixed + (_fmnode * nbsuc)
    data = [nbsuc, len(metadata), flags, pre]
    data.extend(sucs)
    return _pack(format, *data) + metadata

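# An illustrative round trip for the binary helpers (doctest style;
# '\x01' * 20 and '\x02' * 20 stand in for real 20-byte node ids):
#
#   >>> m = ('\x01' * 20, ('\x02' * 20,), 0, encodemeta({'date': '0 0'}))
#   >>> list(_readmarkers(_pack('>B', _fmversion) + _encodeonemarker(m))) == [m]
#   True
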
# arbitrarily picked to fit into the 8K limit of HTTP servers
# you have to take into account:
# - the version header
# - the base85 encoding
_maxpayload = 5300

def listmarkers(repo):
    """List markers over pushkey"""
    if not repo.obsstore:
        return {}
    keys = {}
    parts = []
    currentlen = _maxpayload * 2  # ensure we create a new part
    for marker in repo.obsstore:
        nextdata = _encodeonemarker(marker)
        if len(nextdata) + currentlen > _maxpayload:
            currentpart = []
            currentlen = 0
            parts.append(currentpart)
        currentpart.append(nextdata)
        currentlen += len(nextdata)
    for idx, part in enumerate(reversed(parts)):
        data = ''.join([_pack('>B', _fmversion)] + part)
        keys['dump%i' % idx] = base85.b85encode(data)
    return keys

def pushmarker(repo, key, old, new):
    """Push markers over pushkey"""
    if not key.startswith('dump'):
        repo.ui.warn(_('unknown key: %r') % key)
        return 0
    if old:
        repo.ui.warn(_('unexpected old value for %r') % key)
        return 0
    data = base85.b85decode(new)
    lock = repo.lock()
    try:
        tr = repo.transaction('pushkey: obsolete markers')
        try:
            repo.obsstore.mergemarkers(tr, data)
            tr.close()
            return 1
        finally:
            tr.release()
    finally:
        lock.release()

def syncpush(repo, remote):
    """utility function to push obsolete markers to a remote

    Exists mostly to allow overriding for experimentation purposes"""
    if (_enabled and repo.obsstore and
        'obsolete' in remote.listkeys('namespaces')):
        rslts = []
        remotedata = repo.listkeys('obsolete')
        for key in sorted(remotedata, reverse=True):
            # reverse sort to ensure we end with dump0
            data = remotedata[key]
            rslts.append(remote.pushkey('obsolete', key, '', data))
        if [r for r in rslts if not r]:
            msg = _('failed to push some obsolete markers!\n')
            repo.ui.warn(msg)

def syncpull(repo, remote, gettransaction):
    """utility function to pull obsolete markers from a remote

    The `gettransaction` argument is a function that returns the pull
    transaction, creating one if necessary. We return the transaction to
    inform the calling code that a new transaction has been created (when
    applicable).

    Exists mostly to allow overriding for experimentation purposes"""
    tr = None
    if _enabled:
        repo.ui.debug('fetching remote obsolete markers\n')
        remoteobs = remote.listkeys('obsolete')
        if 'dump0' in remoteobs:
            tr = gettransaction()
            for key in sorted(remoteobs, reverse=True):
                if key.startswith('dump'):
                    data = base85.b85decode(remoteobs[key])
                    repo.obsstore.mergemarkers(tr, data)
            repo.invalidatevolatilesets()
    return tr

def allmarkers(repo):
    """all obsolete markers known in a repository"""
    for markerdata in repo.obsstore:
        yield marker(repo, markerdata)

def precursormarkers(ctx):
    """obsolete markers marking this changeset as a successor"""
    for data in ctx._repo.obsstore.precursors.get(ctx.node(), ()):
        yield marker(ctx._repo, data)

def successormarkers(ctx):
    """obsolete markers making this changeset obsolete"""
    for data in ctx._repo.obsstore.successors.get(ctx.node(), ()):
        yield marker(ctx._repo, data)

def allsuccessors(obsstore, nodes, ignoreflags=0):
    """Yield every successor node of <nodes>.

    Some successors may be unknown locally.

    This is a linear yield unsuited to detecting split changesets."""
    remaining = set(nodes)
    seen = set(remaining)
    while remaining:
        current = remaining.pop()
        yield current
        for mark in obsstore.successors.get(current, ()):
            # ignore markers flagged with the specified flag
            if mark[2] & ignoreflags:
                continue
            for suc in mark[1]:
                if suc not in seen:
                    seen.add(suc)
                    remaining.add(suc)

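# An illustrative sketch: collect every node known to have ever succeeded a
# changeset `ctx` (hypothetical changectx; note that the initial node itself
# is yielded first):
#
#   succs = set(allsuccessors(repo.obsstore, [ctx.node()]))
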
def foreground(repo, nodes):
    """return all nodes in the "foreground" of other nodes

    The foreground of a revision is anything reachable using the
    parent -> children or precursor -> successor relations. It is very
    similar to "descendants" but augmented with obsolescence information.

    Beware that possible obsolescence cycles may result in complex
    situations.
    """
    repo = repo.unfiltered()
    foreground = set(repo.set('%ln::', nodes))
    if repo.obsstore:
        # We only need this complicated logic if there is obsolescence
        # XXX will probably deserve an optimised revset.
        nm = repo.changelog.nodemap
        plen = -1
        # compute the whole set of successors or descendants
        while len(foreground) != plen:
            plen = len(foreground)
            succs = set(c.node() for c in foreground)
            mutable = [c.node() for c in foreground if c.mutable()]
            succs.update(allsuccessors(repo.obsstore, mutable))
            known = (n for n in succs if n in nm)
            foreground = set(repo.set('%ln::', known))
    return set(c.node() for c in foreground)


def successorssets(repo, initialnode, cache=None):
    """Return all sets of successors of the initial node

    `initialnode` may be given either as a node or as an integer revision
    number.

    A successors set of changeset A is a group of revisions that succeed A.
    It succeeds A as a consistent whole, each revision being only a partial
    replacement. Successors sets contain non-obsolete changesets only.

    In most cases a changeset A has zero (changeset pruned) or a single
    successors set that contains a single successor (changeset A replaced
    by A').

    When a changeset is split, it results in a successors set containing
    more than a single element. Divergent rewriting will result in multiple
    successors sets.

    They are returned as a list of tuples containing all valid successors
    sets.

    Final successors unknown locally are considered plain prunes (obsoleted
    without successors).

    The optional `cache` parameter is a dictionary that may contain
    precomputed successors sets. It is meant to reuse the computation of
    previous calls to `successorssets` when multiple calls are made at the
    same time. The cache dictionary is updated in place. The caller is
    responsible for its life span. Code that makes multiple calls to
    `successorssets` *must* use this cache mechanism or suffer terrible
    performance."""

    if isinstance(initialnode, int):
        initialnode = repo.unfiltered().changelog.node(initialnode)

    succmarkers = repo.obsstore.successors

    # Stack of nodes we search successors sets for
    toproceed = [initialnode]
    # set version of above list for fast loop detection
    # element added to "toproceed" must be added here
    stackedset = set(toproceed)
    if cache is None:
        cache = {}

    # This while loop is the flattened version of a recursive search for
    # successors sets
    #
    # def successorssets(x):
    #    successors = directsuccessors(x)
    #    ss = [[]]
    #    for succ in directsuccessors(x):
    #        # product as in itertools cartesian product
    #        ss = product(ss, successorssets(succ))
    #    return ss
    #
    # But we can not use plain recursive calls here:
    # - that would blow the python call stack
    # - obsolescence markers may have cycles, we need to handle them.
    #
    # The `toproceed` list acts as our call stack. Every node we search
    # successors sets for is stacked there.
    #
    # The `stackedset` is the set version of this stack, used to check if a
    # node is already stacked. This check is used to detect cycles and
    # prevent infinite loops.
    #
    # successors sets of all nodes are stored in the `cache` dictionary.
    #
    # After this while loop ends we use the cache to return the successors
    # sets for the node requested by the caller.
    while toproceed:
        # Every iteration tries to compute the successors sets of the topmost
        # node of the stack: CURRENT.
        #
        # There are four possible outcomes:
        #
        # 1) We already know the successors sets of CURRENT:
        #    -> mission accomplished, pop it from the stack.
        # 2) Node is not obsolete:
        #    -> the node is its own successors set. Add it to the cache.
        # 3) We do not know the successors sets of the direct successors of
        #    CURRENT:
        #    -> We add those successors to the stack.
        # 4) We know the successors sets of all direct successors of CURRENT:
        #    -> We can compute CURRENT's successors sets and add them to the
        #       cache.
        #
        current = toproceed[-1]
        if current in cache:
            # case (1): We already know the successors sets
            stackedset.remove(toproceed.pop())
        elif current not in succmarkers:
            # case (2): The node is not obsolete.
            if current in repo:
                # We have a valid last successor.
                cache[current] = [(current,)]
            else:
                # Final obsolete version is unknown locally.
                # Do not count that as a valid successor.
                cache[current] = []
        else:
            # cases (3) and (4)
            #
            # We proceed in two phases. Phase 1 aims to distinguish case (3)
            # from case (4):
            #
            #     For each direct successor of CURRENT, we check whether its
            #     successors sets are known. If they are not, we stack the
            #     unknown node and proceed to the next iteration of the while
            #     loop. (case 3)
            #
            #     During this step, we may detect obsolescence cycles: a node
            #     with unknown successors sets but already in the call stack.
            #     In such a situation, we arbitrarily set the successors sets
            #     of the node to nothing (node pruned) to break the cycle.
            #
            #     If no break was encountered we proceed to phase 2.
            #
            # Phase 2 computes successors sets of CURRENT (case 4); see details
            # in phase 2 itself.
            #
            # Note the two levels of iteration in each phase.
            # - The first one handles obsolescence markers using CURRENT as
            #   precursor (successors markers of CURRENT).
            #
            #   Having multiple entries here means divergence.
            #
            # - The second one handles successors defined in each marker.
            #
            #   Having none means a pruned node, multiple successors mean a
            #   split, a single successor is a standard replacement.
            #
            for mark in sorted(succmarkers[current]):
                for suc in mark[1]:
                    if suc not in cache:
                        if suc in stackedset:
                            # cycle breaking
                            cache[suc] = []
                        else:
                            # case (3) If we have not computed successors sets
                            # of one of those successors we add it to the
                            # `toproceed` stack and stop all work for this
                            # iteration.
                            toproceed.append(suc)
                            stackedset.add(suc)
                            break
                else:
                    continue
                break
            else:
                # case (4): we know all successors sets of all direct
                # successors
                #
                # The successors set contributed by each marker depends on the
                # successors sets of all its "successors" nodes.
                #
                # Each different marker is a divergence in the obsolescence
                # history. It contributes successors sets distinct from other
                # markers.
                #
                # Within a marker, a successor may have divergent successors
                # sets. In such a case, the marker will contribute multiple
                # divergent successors sets. If multiple successors have
                # divergent successors sets, a cartesian product is used
                # (e.g. if B's successors sets are [(B1,)] and C's are
                # [(C1,), (C2,)], a marker (A, (B, C)) contributes the sets
                # (B1, C1) and (B1, C2)).
                #
                # At the end we post-process successors sets to remove
                # duplicated entries and successors sets that are strict
                # subsets of another one.
                succssets = []
                for mark in sorted(succmarkers[current]):
                    # successors sets contributed by this marker
                    markss = [[]]
                    for suc in mark[1]:
                        # cartesian product with previous successors
                        productresult = []
                        for prefix in markss:
                            for suffix in cache[suc]:
                                newss = list(prefix)
                                for part in suffix:
                                    # do not duplicate entries in a successors
                                    # set; the first entry wins.
                                    if part not in newss:
                                        newss.append(part)
                                productresult.append(newss)
                        markss = productresult
                    succssets.extend(markss)
                # remove duplicates and subsets
                seen = []
                final = []
                candidate = sorted(((set(s), s) for s in succssets if s),
                                   key=lambda x: len(x[1]), reverse=True)
                for setversion, listversion in candidate:
                    for seenset in seen:
                        if setversion.issubset(seenset):
                            break
                    else:
                        final.append(listversion)
                        seen.append(setversion)
                final.reverse()  # put small successors sets first
                cache[current] = final
    return cache[initialnode]

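# With the isinstance() conversion at the top of successorssets(), both
# spellings below are equivalent (an illustrative sketch, `ctx` being any
# local changectx):
#
#   successorssets(repo, ctx.node())  # a binary node, as before
#   successorssets(repo, ctx.rev())   # an integer revision, now also accepted
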
def _knownrevs(repo, nodes):
    """yield the revision numbers of the known nodes passed as parameters

    Unknown revisions are silently ignored."""
    torev = repo.changelog.nodemap.get
    for n in nodes:
        rev = torev(n)
        if rev is not None:
            yield rev

# mapping of 'set-name' -> <function to compute this set>
cachefuncs = {}
def cachefor(name):
    """Decorator to register a function as computing the cache for a set"""
    def decorator(func):
        assert name not in cachefuncs
        cachefuncs[name] = func
        return func
    return decorator

def getrevs(repo, name):
    """Return the set of revisions that belong to the <name> set

    Such access may compute the set and cache it for future use"""
    repo = repo.unfiltered()
    if not repo.obsstore:
        return ()
    if name not in repo.obsstore.caches:
        repo.obsstore.caches[name] = cachefuncs[name](repo)
    return repo.obsstore.caches[name]

# To keep it simple, we need to invalidate the obsolescence caches when:
#
# - a new changeset is added
# - the public phase is changed
# - obsolescence markers are added
# - strip is used on a repo
def clearobscaches(repo):
    """Remove all obsolescence related caches from a repo

    This removes all caches in the obsstore if the obsstore already exists
    on the repo.

    (We could be smarter here given the exact event that triggered the cache
    clearing)"""
    # only clear the caches if there is obsstore data in this repo
    if 'obsstore' in repo._filecache:
        repo.obsstore.caches.clear()

@cachefor('obsolete')
def _computeobsoleteset(repo):
    """the set of obsolete revisions"""
    obs = set()
    getrev = repo.changelog.nodemap.get
    getphase = repo._phasecache.phase
    for node in repo.obsstore.successors:
        rev = getrev(node)
        if rev is not None and getphase(repo, rev):
            obs.add(rev)
    return obs

@cachefor('unstable')
def _computeunstableset(repo):
    """the set of non obsolete revisions with obsolete parents"""
    # revset is not efficient enough here
    # we do (obsolete()::) - obsolete() by hand
    obs = getrevs(repo, 'obsolete')
    if not obs:
        return set()
    cl = repo.changelog
    return set(r for r in cl.descendants(obs) if r not in obs)

@cachefor('suspended')
def _computesuspendedset(repo):
    """the set of obsolete parents with non obsolete descendants"""
    suspended = repo.changelog.ancestors(getrevs(repo, 'unstable'))
    return set(r for r in getrevs(repo, 'obsolete') if r in suspended)

@cachefor('extinct')
def _computeextinctset(repo):
    """the set of obsolete parents without non obsolete descendants"""
    return getrevs(repo, 'obsolete') - getrevs(repo, 'suspended')


@cachefor('bumped')
def _computebumpedset(repo):
    """the set of revs trying to obsolete public revisions"""
    # get all possible bumped changesets
    tonode = repo.changelog.node
    publicnodes = (tonode(r) for r in repo.revs('public()'))
    successors = allsuccessors(repo.obsstore, publicnodes,
                               ignoreflags=bumpedfix)
    # revisions that are public or already obsolete don't count as bumped
    query = '%ld - obsolete() - public()'
    return set(repo.revs(query, _knownrevs(repo, successors)))

@cachefor('divergent')
def _computedivergentset(repo):
    """the set of revs that compete to be the final successors of some
    revision."""
    divergent = set()
    obsstore = repo.obsstore
    newermap = {}
    for ctx in repo.set('(not public()) - obsolete()'):
        mark = obsstore.precursors.get(ctx.node(), ())
        toprocess = set(mark)
        while toprocess:
            prec = toprocess.pop()[0]
            if prec not in newermap:
                successorssets(repo, prec, newermap)
            newer = [n for n in newermap[prec] if n]
            if len(newer) > 1:
                divergent.add(ctx.rev())
                break
            toprocess.update(obsstore.precursors.get(prec, ()))
    return divergent


def createmarkers(repo, relations, flag=0, metadata=None):
    """Add obsolete markers between changesets in a repo

    <relations> must be an iterable of (<old>, (<new>, ...)) tuples.
    `old` and `new` are changectx objects.

    Trying to obsolete a public changeset will raise an exception.

    The current user and date are used except if specified otherwise in the
    metadata attribute.

    This function operates within a transaction of its own, but does
    not take any lock on the repo.
    """
    # prepare metadata
    if metadata is None:
        metadata = {}
    if 'date' not in metadata:
        metadata['date'] = '%i %i' % util.makedate()
    if 'user' not in metadata:
        metadata['user'] = repo.ui.username()
    tr = repo.transaction('add-obsolescence-marker')
    try:
        for prec, sucs in relations:
            if not prec.mutable():
                raise util.Abort("cannot obsolete immutable changeset: %s"
                                 % prec)
            nprec = prec.node()
            nsucs = tuple(s.node() for s in sucs)
            if nprec in nsucs:
                raise util.Abort("changeset %s cannot obsolete itself" % prec)
            repo.obsstore.create(tr, nprec, nsucs, flag, metadata)
            repo.filteredrevcache.clear()
        tr.close()
    finally:
        tr.release()
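
# An illustrative sketch (hypothetical changectx `old` and `new`): obsolete
# `old` in favour of `new` with a single relation:
#
#   createmarkers(repo, [(old, (new,))])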