Backed out changeset 77d434760857
Augie Fackler
r19618:6ac206fb default
@@ -1,822 +1,819 @@
# obsolete.py - obsolete markers handling
#
# Copyright 2012 Pierre-Yves David <pierre-yves.david@ens-lyon.org>
#                Logilab SA        <contact@logilab.fr>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""Obsolete markers handling

An obsolete marker maps an old changeset to a list of new
changesets. If the list of new changesets is empty, the old changeset
is said to be "killed". Otherwise, the old changeset is being
"replaced" by the new changesets.

Obsolete markers can be used to record and distribute changeset graph
transformations performed by history rewriting operations, and help
build new tools to reconcile conflicting rewriting actions. To
facilitate conflict resolution, markers include various annotations
besides the old and new changeset identifiers, such as creation date or
author name.

The old obsoleted changeset is called a "precursor" and possible
replacements are called "successors". Markers that use changeset X as
a precursor are called "successor markers of X" because they hold
information about the successors of X. Markers that use changeset Y as
a successor are called "precursor markers of Y" because they hold
information about the precursors of Y.

Examples:

- When changeset A is replaced by changeset A', one marker is stored:

    (A, (A',))

- When changesets A and B are folded into a new changeset C, two markers are
  stored:

    (A, (C,)) and (B, (C,))

- When changeset A is simply "pruned" from the graph, a marker is created:

    (A, ())

- When changeset A is split into B and C, a single marker is used:

    (A, (B, C))

  We use a single marker to distinguish the "split" case from the
  "divergence" case. If two independent operations rewrite the same
  changeset A into A' and A'', we have an error case: divergent
  rewriting. We can detect it because two markers will be created
  independently:

    (A, (B,)) and (A, (C,))

Format
------

Markers are stored in an append-only file stored in
'.hg/store/obsstore'.

The file starts with a version header:

- 1 unsigned byte: version number, starting at zero.

The header is followed by the markers. Each marker is made of:

- 1 unsigned byte: number of new changesets "N", can be zero.

- 1 unsigned 32-bit integer: metadata size "M" in bytes.

- 1 byte: a bit field. It is reserved for flags used in common
  obsolete marker operations, to avoid repeated decoding of metadata
  entries.

- 20 bytes: obsoleted changeset identifier.

- N*20 bytes: new changeset identifiers.

- M bytes: metadata as a sequence of nul-terminated strings. Each
  string contains a key and a value, separated by a colon ':', without
  additional encoding. Keys cannot contain '\0' or ':' and values
  cannot contain '\0'.
"""
import struct
import util, base85, node
from i18n import _

_pack = struct.pack
_unpack = struct.unpack

_SEEK_END = 2 # os.SEEK_END was introduced in Python 2.5

# The obsolete feature is not mature enough to be enabled by default.
# You have to rely on a third-party extension to enable this.
_enabled = False

# data used for parsing and writing
_fmversion = 0
_fmfixed = '>BIB20s'
_fmnode = '20s'
_fmfsize = struct.calcsize(_fmfixed)
_fnodesize = struct.calcsize(_fmnode)

### obsolescence marker flag

## bumpedfix flag
#
# When a changeset A' succeeds a changeset A which became public, we call A'
# "bumped" because it is a successor of a public changeset.
#
#     o  A' (bumped)
#     |`:
#     | o  A
#     |/
#     o  Z
#
# The way to solve this situation is to create a new changeset Ad as a child
# of A. This changeset has the same content as A'. So the diff from A to A'
# is the same as the diff from A to Ad. Ad is marked as a successor of A'.
#
#     o  Ad
#     |`:
#     | x  A'
#     |'|
#     o |  A
#     |/
#     o  Z
#
# But by transitivity Ad is also a successor of A. To avoid having Ad marked
# as bumped too, we add the `bumpedfix` flag to the marker, <A', (Ad,)>.
# This flag means that the successors express the changes between the public
# and bumped versions and fix the situation, breaking the transitivity of
# "bumped" here.
bumpedfix = 1

def _readmarkers(data):
    """Read and enumerate markers from raw data"""
    off = 0
    diskversion = _unpack('>B', data[off:off + 1])[0]
    off += 1
    if diskversion != _fmversion:
        raise util.Abort(_('parsing obsolete marker: unknown version %r')
                         % diskversion)

    # Loop on markers
    l = len(data)
    while off + _fmfsize <= l:
        # read fixed part
        cur = data[off:off + _fmfsize]
        off += _fmfsize
        nbsuc, mdsize, flags, pre = _unpack(_fmfixed, cur)
        # read replacement
        sucs = ()
        if nbsuc:
            s = _fnodesize * nbsuc
            cur = data[off:off + s]
            sucs = _unpack(_fmnode * nbsuc, cur)
            off += s
        # read metadata
        # (metadata will be decoded on demand)
        metadata = data[off:off + mdsize]
        if len(metadata) != mdsize:
            raise util.Abort(_('parsing obsolete marker: metadata is too '
                               'short, %d bytes expected, got %d')
                             % (mdsize, len(metadata)))
        off += mdsize
        yield (pre, sucs, flags, metadata)

def encodemeta(meta):
    """Return an encoded version of a string-to-string metadata mapping.

    Assumes no ':' in keys and no '\0' in either keys or values."""
    for key, value in meta.iteritems():
        if ':' in key or '\0' in key:
            raise ValueError("':' and '\0' are forbidden in metadata keys")
        if '\0' in value:
            raise ValueError("'\0' is forbidden in metadata values")
    return '\0'.join(['%s:%s' % (k, meta[k]) for k in sorted(meta)])

def decodemeta(data):
    """Return a string-to-string dictionary from the encoded version."""
    d = {}
    for l in data.split('\0'):
        if l:
            # split on the first ':' only; values may contain ':'
            key, value = l.split(':', 1)
            d[key] = value
    return d

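# A minimal doctest-style sketch of the metadata round-trip using the two
# helpers above (illustrative values only):
#
#   >>> meta = {'user': 'alice <alice@example.com>', 'date': '0 0'}
#   >>> encodemeta(meta)
#   'date:0 0\x00user:alice <alice@example.com>'
#   >>> decodemeta(encodemeta(meta)) == meta
#   True
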
class marker(object):
    """Wrap obsolete marker raw data"""

    def __init__(self, repo, data):
        # the repo argument will be used to create changectxs in later
        # versions
        self._repo = repo
        self._data = data
        self._decodedmeta = None

    def precnode(self):
        """Precursor changeset node identifier"""
        return self._data[0]

    def succnodes(self):
        """List of successor changesets' node identifiers"""
        return self._data[1]

    def metadata(self):
        """Decoded metadata dictionary"""
        if self._decodedmeta is None:
            self._decodedmeta = decodemeta(self._data[3])
        return self._decodedmeta

    def date(self):
        """Creation date as (unixtime, offset)"""
        parts = self.metadata()['date'].split(' ')
        return (float(parts[0]), int(parts[1]))

class obsstore(object):
    """Store obsolete markers

    Markers can be accessed with two mappings:
    - precursors[x] -> set(markers on precursor edges of x)
    - successors[x] -> set(markers on successor edges of x)
    """

    def __init__(self, sopener):
        # caches for various obsolescence related sets
        self.caches = {}
        self._all = []
        # new markers to serialize
        self.precursors = {}
        self.successors = {}
        self.sopener = sopener
        data = sopener.tryread('obsstore')
        if data:
            self._load(_readmarkers(data))

    def __iter__(self):
        return iter(self._all)

    def __nonzero__(self):
        return bool(self._all)

    def create(self, transaction, prec, succs=(), flag=0, metadata=None):
        """obsolete: add a new obsolete marker

        * ensure it is hashable
        * check mandatory metadata
        * encode metadata
        """
        if metadata is None:
            metadata = {}
        if 'date' not in metadata:
            metadata['date'] = "%d %d" % util.makedate()
        if len(prec) != 20:
            raise ValueError(prec)
        for succ in succs:
            if len(succ) != 20:
                raise ValueError(succ)
        marker = (str(prec), tuple(succs), int(flag), encodemeta(metadata))
        self.add(transaction, [marker])

    def add(self, transaction, markers):
        """Add new markers to the store

        Takes care of filtering out duplicates.
        Returns the number of new markers."""
        if not _enabled:
            raise util.Abort('obsolete feature is not enabled on this repo')
        new = [m for m in markers if m not in self._all]
        if new:
            f = self.sopener('obsstore', 'ab')
            try:
                # Whether the file's current position is at the beginning
                # or at the end after opening a file for appending is
                # implementation defined. So we must seek to the end before
                # calling tell(), or we may get a zero offset for non-zero
                # sized files on some platforms (issue3543).
                f.seek(0, _SEEK_END)
                offset = f.tell()
                transaction.add('obsstore', offset)
                # offset == 0: new file - add the version header
                for bytes in _encodemarkers(new, offset == 0):
                    f.write(bytes)
            finally:
                # XXX: f.close() == filecache invalidation == obsstore
                # rebuilt. call 'filecacheentry.refresh()' here
                f.close()
            self._load(new)
            # new markers *may* have changed several sets. Invalidate the
            # caches.
            self.caches.clear()
        return len(new)

    def mergemarkers(self, transaction, data):
        markers = _readmarkers(data)
        self.add(transaction, markers)

    def _load(self, markers):
        for mark in markers:
            self._all.append(mark)
            pre, sucs = mark[:2]
            self.successors.setdefault(pre, set()).add(mark)
            for suc in sucs:
                self.precursors.setdefault(suc, set()).add(mark)
        if node.nullid in self.precursors:
            raise util.Abort(_('bad obsolescence marker detected: '
                               'invalid successors nullid'))
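
# Sketch of the two in-memory mappings after loading a single marker
# mark = (A, (B,), 0, '') with hypothetical 20-byte node ids A and B:
#
#   store.successors[A] == set([mark])   # marker on a successor edge of A
#   store.precursors[B] == set([mark])   # marker on a precursor edge of B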

def _encodemarkers(markers, addheader=False):
    # Kept separate from flushmarkers(), it will be reused for
    # markers exchange.
    if addheader:
        yield _pack('>B', _fmversion)
    for marker in markers:
        yield _encodeonemarker(marker)


def _encodeonemarker(marker):
    pre, sucs, flags, metadata = marker
    nbsuc = len(sucs)
    format = _fmfixed + (_fmnode * nbsuc)
    data = [nbsuc, len(metadata), flags, pre]
    data.extend(sucs)
    return _pack(format, *data) + metadata

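# Illustrative size check for the encoding above, using the formats defined
# at the top of this module (doctest-style sketch):
#
#   >>> struct.calcsize('>BIB20s')   # _fmfixed: nbsuc, mdsize, flags, pre
#   26
#   >>> m = ('a' * 20, ('b' * 20,), 0, encodemeta({'date': '0 0'}))
#   >>> len(_encodeonemarker(m)) == 26 + 20 + len('date:0 0')
#   True
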
# arbitrarily picked to fit into the 8K limit from HTTP servers
# you have to take into account:
# - the version header
# - the base85 encoding
_maxpayload = 5300

def listmarkers(repo):
    """List markers over pushkey"""
    if not repo.obsstore:
        return {}
    keys = {}
    parts = []
    currentlen = _maxpayload * 2  # ensure we create a new part
    for marker in repo.obsstore:
        nextdata = _encodeonemarker(marker)
        if len(nextdata) + currentlen > _maxpayload:
            currentpart = []
            currentlen = 0
            parts.append(currentpart)
        currentpart.append(nextdata)
        currentlen += len(nextdata)
    for idx, part in enumerate(reversed(parts)):
        data = ''.join([_pack('>B', _fmversion)] + part)
        keys['dump%i' % idx] = base85.b85encode(data)
    return keys
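
# Sketch of the chunking above: parts are filled in marker order, then
# named in reverse, so the last part filled becomes 'dump0' (hypothetical
# part variables):
#
#   parts == [p0, p1, p2]  ->  keys == {'dump0': p2, 'dump1': p1,
#                                       'dump2': p0}
#
# Consumers apply the keys in reverse-sorted order so that they finish
# with 'dump0' (see syncpush and syncpull below).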

def pushmarker(repo, key, old, new):
    """Push markers over pushkey"""
    if not key.startswith('dump'):
        repo.ui.warn(_('unknown key: %r') % key)
        return 0
    if old:
        repo.ui.warn(_('unexpected old value for %r') % key)
        return 0
    data = base85.b85decode(new)
    lock = repo.lock()
    try:
        tr = repo.transaction('pushkey: obsolete markers')
        try:
            repo.obsstore.mergemarkers(tr, data)
            tr.close()
            return 1
        finally:
            tr.release()
    finally:
        lock.release()

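# Hedged sketch of the pushkey round trip implemented by listmarkers() and
# pushmarker() (direct calls for illustration; real use goes over the wire,
# and srcrepo/dstrepo are hypothetical):
#
#   keys = listmarkers(srcrepo)                  # {'dump0': <base85>, ...}
#   for key in sorted(keys, reverse=True):
#       pushmarker(dstrepo, key, '', keys[key])  # returns 1 on success
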
def syncpush(repo, remote):
    """utility function to push obsolete markers to a remote

    Exists mostly to allow overriding for experimentation purposes"""
    if (_enabled and repo.obsstore and
        'obsolete' in remote.listkeys('namespaces')):
        rslts = []
        remotedata = repo.listkeys('obsolete')
        for key in sorted(remotedata, reverse=True):
            # reverse sort to ensure we end with dump0
            data = remotedata[key]
            rslts.append(remote.pushkey('obsolete', key, '', data))
        if [r for r in rslts if not r]:
            msg = _('failed to push some obsolete markers!\n')
            repo.ui.warn(msg)

def syncpull(repo, remote, gettransaction):
    """utility function to pull obsolete markers from a remote

    `gettransaction` is a function that returns the pull transaction,
    creating one if necessary. We return the transaction to inform the
    calling code that a new transaction has been created (when applicable).

    Exists mostly to allow overriding for experimentation purposes"""
    tr = None
    if _enabled:
        repo.ui.debug('fetching remote obsolete markers\n')
        remoteobs = remote.listkeys('obsolete')
        if 'dump0' in remoteobs:
            tr = gettransaction()
            for key in sorted(remoteobs, reverse=True):
                if key.startswith('dump'):
                    data = base85.b85decode(remoteobs[key])
                    repo.obsstore.mergemarkers(tr, data)
            repo.invalidatevolatilesets()
    return tr

def allmarkers(repo):
    """all obsolete markers known in a repository"""
    for markerdata in repo.obsstore:
        yield marker(repo, markerdata)

def precursormarkers(ctx):
    """obsolete markers marking this changeset as a successor"""
    for data in ctx._repo.obsstore.precursors.get(ctx.node(), ()):
        yield marker(ctx._repo, data)

def successormarkers(ctx):
    """obsolete markers making this changeset obsolete"""
    for data in ctx._repo.obsstore.successors.get(ctx.node(), ()):
        yield marker(ctx._repo, data)

def allsuccessors(obsstore, nodes, ignoreflags=0):
    """Yield every successor node of <nodes>.

    Some successors may be unknown locally.

    This is a linear yield unsuited to detecting split changesets."""
    remaining = set(nodes)
    seen = set(remaining)
    while remaining:
        current = remaining.pop()
        yield current
        for mark in obsstore.successors.get(current, ()):
            # ignore markers flagged with the specified flag
            if mark[2] & ignoreflags:
                continue
            for suc in mark[1]:
                if suc not in seen:
                    seen.add(suc)
                    remaining.add(suc)

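# Hedged sketch of a walk over a tiny stand-in for an obsstore, where A was
# rewritten as B and B as C (single letters stand in for 20-byte node ids):
#
#   >>> class fakestore(object):
#   ...     successors = {'A': set([('A', ('B',), 0, '')]),
#   ...                   'B': set([('B', ('C',), 0, '')])}
#   >>> sorted(allsuccessors(fakestore(), ['A']))
#   ['A', 'B', 'C']
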
def foreground(repo, nodes):
    """return all nodes in the "foreground" of other nodes

    The foreground of a revision is anything reachable using the
    parent -> children or precursor -> successor relations. It is very
    similar to "descendants" but augmented with obsolescence information.

    Beware that possible obsolescence cycles may result in complex
    situations.
    """
    repo = repo.unfiltered()
    foreground = set(repo.set('%ln::', nodes))
    if repo.obsstore:
        # We only need this complicated logic if there is obsolescence
        # XXX will probably deserve an optimised revset.
        nm = repo.changelog.nodemap
        plen = -1
        # compute the whole set of successors or descendants
        while len(foreground) != plen:
            plen = len(foreground)
            succs = set(c.node() for c in foreground)
            mutable = [c.node() for c in foreground if c.mutable()]
            succs.update(allsuccessors(repo.obsstore, mutable))
            known = (n for n in succs if n in nm)
            foreground = set(repo.set('%ln::', known))
    return set(c.node() for c in foreground)


def successorssets(repo, initialnode, cache=None):
    """Return all sets of successors of the initial node

    The successors set of a changeset A is a group of revisions that
    succeed A. It succeeds A as a consistent whole, each revision being
    only a partial replacement. Successors sets contain non-obsolete
    changesets only.

    In most cases a changeset A has zero successors sets (changeset
    pruned) or a single successors set containing a single successor
    (changeset A replaced by A').

    When a changeset is split, it results in a successors set containing
    more than a single element. Divergent rewriting will result in
    multiple successors sets.

    They are returned as a list of tuples containing all valid successors
    sets.

    Final successors unknown locally are considered plain prunes
    (obsoleted without successors).

    The optional `cache` parameter is a dictionary that may contain
    precomputed successors sets. It is meant to reuse the computation of
    previous calls to `successorssets` when multiple calls are made at
    the same time. The cache dictionary is updated in place. The caller
    is responsible for its life span. Code that makes multiple calls to
    `successorssets` *must* use this cache mechanism or suffer terrible
    performance."""

-    if isinstance(initialnode, int):
-        initialnode = repo.unfiltered().changelog.node(initialnode)
-
    succmarkers = repo.obsstore.successors

    # Stack of nodes we search successors sets for
    toproceed = [initialnode]
    # set version of the above list for fast loop detection
    # elements added to "toproceed" must be added here
    stackedset = set(toproceed)
    if cache is None:
        cache = {}

    # This while loop is the flattened version of a recursive search for
    # successors sets
    #
    # def successorssets(x):
    #     successors = directsuccessors(x)
    #     ss = [[]]
    #     for succ in directsuccessors(x):
    #         # product as in itertools cartesian product
    #         ss = product(ss, successorssets(succ))
    #     return ss
    #
    # But we can not use plain recursive calls here:
    # - that would blow the python call stack
    # - obsolescence markers may have cycles, we need to handle them.
    #
    # The `toproceed` list acts as our call stack. Every node we search
    # successors sets for is stacked there.
    #
    # The `stackedset` is a set version of this stack used to check if a
    # node is already stacked. This check is used to detect cycles and
    # prevent infinite loops.
    #
    # Successors sets of all nodes are stored in the `cache` dictionary.
    #
    # After this while loop ends we use the cache to return the successors
    # sets for the node requested by the caller.
    while toproceed:
        # Every iteration tries to compute the successors sets of the
        # topmost node of the stack: CURRENT.
        #
        # There are four possible outcomes:
        #
        # 1) We already know the successors sets of CURRENT:
        #    -> mission accomplished, pop it from the stack.
        # 2) Node is not obsolete:
        #    -> the node is its own successors set. Add it to the cache.
        # 3) We do not know the successors sets of direct successors of
        #    CURRENT:
        #    -> We add those successors to the stack.
        # 4) We know the successors sets of all direct successors of
        #    CURRENT:
        #    -> We can compute CURRENT's successors sets and add them to
        #       the cache.
        #
        current = toproceed[-1]
        if current in cache:
            # case (1): We already know the successors sets
            stackedset.remove(toproceed.pop())
        elif current not in succmarkers:
            # case (2): The node is not obsolete.
            if current in repo:
                # We have a valid last successor.
                cache[current] = [(current,)]
            else:
                # Final obsolete version is unknown locally.
                # Do not count that as a valid successor.
                cache[current] = []
        else:
            # cases (3) and (4)
            #
            # We proceed in two phases. Phase 1 aims to distinguish case
            # (3) from case (4):
            #
            # For each direct successor of CURRENT, we check whether its
            # successors sets are known. If they are not, we stack the
            # unknown node and proceed to the next iteration of the while
            # loop. (case 3)
            #
            # During this step, we may detect obsolescence cycles: a node
            # with unknown successors sets but already in the call stack.
            # In such a situation, we arbitrarily set the successors sets
            # of the node to nothing (node pruned) to break the cycle.
            #
            # If no break was encountered we proceed to phase 2.
            #
            # Phase 2 computes the successors sets of CURRENT (case 4);
            # see details in phase 2 itself.
            #
            # Note the two levels of iteration in each phase.
            # - The first one handles obsolescence markers using CURRENT
            #   as precursor (successor markers of CURRENT).
            #
            #   Having multiple entries here means divergence.
            #
            # - The second one handles successors defined in each marker.
            #
            #   Having none means a pruned node, multiple successors mean
            #   a split, a single successor is a standard replacement.
            #
            for mark in sorted(succmarkers[current]):
                for suc in mark[1]:
                    if suc not in cache:
                        if suc in stackedset:
                            # cycle breaking
                            cache[suc] = []
                        else:
                            # case (3) If we have not computed successors
                            # sets of one of those successors, we add it
                            # to the `toproceed` stack and stop all work
                            # for this iteration.
                            toproceed.append(suc)
                            stackedset.add(suc)
                            break
                else:
                    continue
                break
            else:
                # case (4): we know all successors sets of all direct
                # successors
                #
                # The successors set contributed by each marker depends on
                # the successors sets of all its "successors" nodes.
                #
                # Each different marker is a divergence in the obsolescence
                # history. It contributes successors sets distinct from
                # other markers.
                #
                # Within a marker, a successor may have divergent
                # successors sets. In such a case, the marker will
                # contribute multiple divergent successors sets. If
                # multiple successors have divergent successors sets, a
                # cartesian product is used.
                #
                # At the end we post-process successors sets to remove
                # duplicated entries and successors sets that are strict
                # subsets of another one.
                succssets = []
                for mark in sorted(succmarkers[current]):
                    # successors sets contributed by this marker
                    markss = [[]]
                    for suc in mark[1]:
                        # cartesian product with previous successors
                        productresult = []
                        for prefix in markss:
                            for suffix in cache[suc]:
                                newss = list(prefix)
                                for part in suffix:
                                    # do not duplicate entries in a
                                    # successors set; first entry wins.
                                    if part not in newss:
                                        newss.append(part)
                                productresult.append(newss)
                        markss = productresult
                    succssets.extend(markss)
                # remove duplicates and subsets
                seen = []
                final = []
                candidate = sorted(((set(s), s) for s in succssets if s),
                                   key=lambda x: len(x[1]), reverse=True)
                for setversion, listversion in candidate:
                    for seenset in seen:
                        if setversion.issubset(seenset):
                            break
                    else:
                        final.append(listversion)
                        seen.append(setversion)
                final.reverse() # put small successors sets first
                cache[current] = final
    return cache[initialnode]

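# Hedged usage sketch: share one cache dict across multiple calls, as the
# docstring above requires (node1 and node2 are hypothetical nodes):
#
#   cache = {}
#   for n in (node1, node2):
#       for ss in successorssets(repo, n, cache):
#           pass    # each ss is a tuple of successor nodes
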
def _knownrevs(repo, nodes):
    """yield revision numbers of known nodes passed in parameters

    Unknown revisions are silently ignored."""
    torev = repo.changelog.nodemap.get
    for n in nodes:
        rev = torev(n)
        if rev is not None:
            yield rev

# mapping of 'set-name' -> <function to compute this set>
cachefuncs = {}
def cachefor(name):
    """Decorator to register a function as computing the cache for a set"""
    def decorator(func):
        assert name not in cachefuncs
        cachefuncs[name] = func
        return func
    return decorator

def getrevs(repo, name):
    """Return the set of revisions that belong to the <name> set

    Such access may compute the set and cache it for future use"""
    repo = repo.unfiltered()
    if not repo.obsstore:
        return ()
    if name not in repo.obsstore.caches:
        repo.obsstore.caches[name] = cachefuncs[name](repo)
    return repo.obsstore.caches[name]

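# Hedged usage sketch of the named sets registered below with @cachefor:
#
#   obs = getrevs(repo, 'obsolete')       # computed once, then served from
#   unstable = getrevs(repo, 'unstable')  # repo.obsstore.caches
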
# To be simple we need to invalidate obsolescence caches when:
#
# - a new changeset is added
# - a public phase is changed
# - obsolescence markers are added
# - strip is used on a repo
def clearobscaches(repo):
    """Remove all obsolescence related caches from a repo

    This removes all caches in the obsstore if the obsstore already exists
    on the repo.

    (We could be smarter here given the exact event that triggered the
    cache clearing)"""
    # only clear caches if there is obsstore data in this repo
    if 'obsstore' in repo._filecache:
        repo.obsstore.caches.clear()

@cachefor('obsolete')
def _computeobsoleteset(repo):
    """the set of obsolete revisions"""
    obs = set()
    getrev = repo.changelog.nodemap.get
    getphase = repo._phasecache.phase
    for node in repo.obsstore.successors:
        rev = getrev(node)
        if rev is not None and getphase(repo, rev):
            obs.add(rev)
    return obs

@cachefor('unstable')
def _computeunstableset(repo):
    """the set of non obsolete revisions with obsolete parents"""
    # revset is not efficient enough here
    # we do (obsolete()::) - obsolete() by hand
    obs = getrevs(repo, 'obsolete')
    if not obs:
        return set()
    cl = repo.changelog
    return set(r for r in cl.descendants(obs) if r not in obs)

@cachefor('suspended')
def _computesuspendedset(repo):
    """the set of obsolete revisions with non obsolete descendants"""
    suspended = repo.changelog.ancestors(getrevs(repo, 'unstable'))
    return set(r for r in getrevs(repo, 'obsolete') if r in suspended)

@cachefor('extinct')
def _computeextinctset(repo):
    """the set of obsolete revisions without non obsolete descendants"""
    return getrevs(repo, 'obsolete') - getrevs(repo, 'suspended')


@cachefor('bumped')
def _computebumpedset(repo):
    """the set of revs trying to obsolete public revisions"""
    # get all possible bumped changesets
    tonode = repo.changelog.node
    publicnodes = (tonode(r) for r in repo.revs('public()'))
    successors = allsuccessors(repo.obsstore, publicnodes,
                               ignoreflags=bumpedfix)
    # revisions that are public or already obsolete don't count as bumped
    query = '%ld - obsolete() - public()'
    return set(repo.revs(query, _knownrevs(repo, successors)))

@cachefor('divergent')
def _computedivergentset(repo):
    """the set of revs that compete to be the final successors of some
    revision."""
    divergent = set()
    obsstore = repo.obsstore
    newermap = {}
    for ctx in repo.set('(not public()) - obsolete()'):
        mark = obsstore.precursors.get(ctx.node(), ())
        toprocess = set(mark)
        while toprocess:
            prec = toprocess.pop()[0]
            if prec not in newermap:
                successorssets(repo, prec, newermap)
            newer = [n for n in newermap[prec] if n]
            if len(newer) > 1:
                divergent.add(ctx.rev())
                break
            toprocess.update(obsstore.precursors.get(prec, ()))
    return divergent


def createmarkers(repo, relations, flag=0, metadata=None):
    """Add obsolete markers between changesets in a repo

    <relations> must be an iterable of (<old>, (<new>, ...)) tuples.
    `old` and `new` are changectxs.

    Trying to obsolete a public changeset will raise an exception.

    Current user and date are used except if specified otherwise in the
    metadata attribute.

    This function operates within a transaction of its own, but does
    not take any lock on the repo.
    """
    # prepare metadata
    if metadata is None:
        metadata = {}
    if 'date' not in metadata:
        metadata['date'] = '%i %i' % util.makedate()
    if 'user' not in metadata:
        metadata['user'] = repo.ui.username()
    tr = repo.transaction('add-obsolescence-marker')
    try:
        for prec, sucs in relations:
            if not prec.mutable():
                raise util.Abort("cannot obsolete immutable changeset: %s"
                                 % prec)
            nprec = prec.node()
            nsucs = tuple(s.node() for s in sucs)
            if nprec in nsucs:
                raise util.Abort("changeset %s cannot obsolete itself"
                                 % prec)
            repo.obsstore.create(tr, nprec, nsucs, flag, metadata)
        repo.filteredrevcache.clear()
        tr.close()
    finally:
        tr.release()
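
# Hedged usage sketch (oldctx, newctx, deadctx are hypothetical changectxs):
#
#   createmarkers(repo, [(oldctx, (newctx,))])   # oldctx replaced by newctx
#   createmarkers(repo, [(deadctx, ())])         # deadctx pruned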