obsolete: fix bad comment...
Pierre-Yves David
r20203:509768fc default
@@ -1,832 +1,832 @@
# obsolete.py - obsolete markers handling
#
# Copyright 2012 Pierre-Yves David <pierre-yves.david@ens-lyon.org>
#                Logilab SA        <contact@logilab.fr>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""Obsolete markers handling

An obsolete marker maps an old changeset to a list of new
changesets. If the list of new changesets is empty, the old changeset
is said to be "killed". Otherwise, the old changeset is being
"replaced" by the new changesets.

Obsolete markers can be used to record and distribute changeset graph
transformations performed by history rewriting operations, and help
build new tools to reconcile conflicting rewriting actions. To
facilitate conflict resolution, markers include various annotations
besides old and new changeset identifiers, such as creation date or
author name.

The old obsoleted changeset is called a "precursor" and possible
replacements are called "successors". Markers that use changeset X as
a precursor are called "successor markers of X" because they hold
information about the successors of X. Markers that use changeset Y as
a successor are called "precursor markers of Y" because they hold
information about the precursors of Y.

Examples:

- When changeset A is replaced by a changeset A', one marker is stored:

    (A, (A',))

- When changesets A and B are folded into a new changeset C, two markers are
  stored:

    (A, (C,)) and (B, (C,))

- When changeset A is simply "pruned" from the graph, a marker is created:

    (A, ())

- When changeset A is split into B and C, a single marker is used:

    (A, (B, C))

  We use a single marker to distinguish the "split" case from the
  "divergence" case. If two independent operations rewrite the same
  changeset A into A' and A'', we have an error case: divergent rewriting.
  We can detect it because two markers will be created independently:

    (A, (B,)) and (A, (C,))

Format
------

Markers are stored in an append-only file stored in
'.hg/store/obsstore'.

The file starts with a version header:

- 1 unsigned byte: version number, starting at zero.


The header is followed by the markers. Each marker is made of:

- 1 unsigned byte: number of new changesets "N", could be zero.

- 1 unsigned 32-bit integer: metadata size "M" in bytes.

- 1 byte: a bit field. It is reserved for flags used in common
  obsolete marker operations, to avoid repeated decoding of metadata
  entries.

- 20 bytes: obsoleted changeset identifier.

- N*20 bytes: new changeset identifiers.

- M bytes: metadata as a sequence of nul-terminated strings. Each
  string contains a key and a value, separated by a colon ':', without
  additional encoding. Keys cannot contain '\0' or ':' and values
  cannot contain '\0'.
"""
import struct
import util, base85, node
from i18n import _

_pack = struct.pack
_unpack = struct.unpack

_SEEK_END = 2 # os.SEEK_END was introduced in Python 2.5

# the obsolete feature is not mature enough to be enabled by default.
# you have to rely on third party extensions to enable this.
_enabled = False

# data used for parsing and writing
_fmversion = 0
_fmfixed = '>BIB20s'
_fmnode = '20s'
_fmfsize = struct.calcsize(_fmfixed)
_fnodesize = struct.calcsize(_fmnode)

### obsolescence marker flag

## bumpedfix flag
#
# When a changeset A' succeeds a changeset A which became public, we call A'
# "bumped" because it's a successor of a public changeset
#
# o    A' (bumped)
# |`:
# | o  A
# |/
# o    Z
#
# The way to solve this situation is to create a new changeset Ad as a child
# of A. This changeset has the same content as A'. So the diff from A to A'
# is the same as the diff from A to Ad. Ad is marked as a successor of A'
#
# o   Ad
# |`:
# | x A'
# |'|
# o | A
# |/
# o   Z
#
# But by transitivity Ad is also a successor of A. To avoid having Ad marked
# as bumped too, we add the `bumpedfix` flag to the marker, <A', (Ad,)>.
# This flag means that the successor expresses the changes between the public
# and bumped version and fixes the situation, breaking the transitivity of
# "bumped" here.
bumpedfix = 1

def _readmarkers(data):
    """Read and enumerate markers from raw data"""
    off = 0
    diskversion = _unpack('>B', data[off:off + 1])[0]
    off += 1
    if diskversion != _fmversion:
        raise util.Abort(_('parsing obsolete marker: unknown version %r')
                         % diskversion)

    # Loop on markers
    l = len(data)
    while off + _fmfsize <= l:
        # read fixed part
        cur = data[off:off + _fmfsize]
        off += _fmfsize
        nbsuc, mdsize, flags, pre = _unpack(_fmfixed, cur)
        # read replacement
        sucs = ()
        if nbsuc:
            s = (_fnodesize * nbsuc)
            cur = data[off:off + s]
            sucs = _unpack(_fmnode * nbsuc, cur)
            off += s
        # read metadata
        # (metadata will be decoded on demand)
        metadata = data[off:off + mdsize]
        if len(metadata) != mdsize:
            raise util.Abort(_('parsing obsolete marker: metadata is too '
                               'short, %d bytes expected, got %d')
                             % (mdsize, len(metadata)))
        off += mdsize
        yield (pre, sucs, flags, metadata)

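# A minimal usage sketch (an assumption, not original code): iterating over
# the markers of a raw obsstore file; 'rawdata' is the full file content,
# version header included:
#
#   rawdata = repo.sopener.tryread('obsstore')
#   if rawdata:
#       for prec, sucs, flags, meta in _readmarkers(rawdata):
#           print node.hex(prec), [node.hex(s) for s in sucs]
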
def encodemeta(meta):
    """Return encoded metadata string to string mapping.

    Assume no ':' in key and no '\0' in both key and value."""
    for key, value in meta.iteritems():
        if ':' in key or '\0' in key:
            raise ValueError("':' and '\0' are forbidden in metadata keys")
        if '\0' in value:
            raise ValueError("'\0' is forbidden in metadata values")
    return '\0'.join(['%s:%s' % (k, meta[k]) for k in sorted(meta)])

def decodemeta(data):
    """Return string to string dictionary from encoded version."""
    d = {}
    for l in data.split('\0'):
        if l:
            # split on the first colon only; values may contain ':'
            key, value = l.split(':', 1)
            d[key] = value
    return d

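# Round-trip sketch for the two helpers above (values are hypothetical):
# encodemeta and decodemeta are inverses as long as keys avoid ':' and '\0'
# and values avoid '\0':
#
#   meta = {'date': '0 0', 'user': 'alice <alice@example.org>'}
#   assert decodemeta(encodemeta(meta)) == meta
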
class marker(object):
    """Wrap obsolete marker raw data"""

    def __init__(self, repo, data):
        # the repo argument will be used to create changectx in a later
        # version
        self._repo = repo
        self._data = data
        self._decodedmeta = None

    def __hash__(self):
        return hash(self._data)

    def __eq__(self, other):
        if type(other) != type(self):
            return False
        return self._data == other._data

    def precnode(self):
        """Precursor changeset node identifier"""
        return self._data[0]

    def succnodes(self):
        """List of successor changeset node identifiers"""
        return self._data[1]

    def metadata(self):
        """Decoded metadata dictionary"""
        if self._decodedmeta is None:
            self._decodedmeta = decodemeta(self._data[3])
        return self._decodedmeta

    def date(self):
        """Creation date as (unixtime, offset)"""
        parts = self.metadata()['date'].split(' ')
        return (float(parts[0]), int(parts[1]))

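# Sketch of reading markers through the wrapper class above (assumes a repo
# whose obsstore holds markers; purely illustrative). allmarkers() is defined
# further down in this module:
#
#   for m in allmarkers(repo):
#       print node.hex(m.precnode()), m.metadata().get('user', 'unknown')
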
class obsstore(object):
    """Store obsolete markers

    Markers can be accessed with two mappings:
    - precursors[x] -> set(markers on precursors edges of x)
    - successors[x] -> set(markers on successors edges of x)
    """

    def __init__(self, sopener):
        # caches for various obsolescence related sets
        self.caches = {}
        self._all = []
        # mappings from node to markers, filled by _load()
        self.precursors = {}
        self.successors = {}
        self.sopener = sopener
        data = sopener.tryread('obsstore')
        if data:
            self._load(_readmarkers(data))

    def __iter__(self):
        return iter(self._all)

    def __nonzero__(self):
        return bool(self._all)

    def create(self, transaction, prec, succs=(), flag=0, metadata=None):
        """Add a new obsolete marker

        * ensuring it is hashable
        * checking mandatory metadata
        * encoding metadata
        """
        if metadata is None:
            metadata = {}
        if 'date' not in metadata:
            metadata['date'] = "%d %d" % util.makedate()
        if len(prec) != 20:
            raise ValueError(prec)
        for succ in succs:
            if len(succ) != 20:
                raise ValueError(succ)
        marker = (str(prec), tuple(succs), int(flag), encodemeta(metadata))
        self.add(transaction, [marker])

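    # Usage sketch for create() (hypothetical nodes A and Aprime, not
    # original code); it must run inside a transaction:
    #
    #   tr = repo.transaction('add-obsolescence-marker')
    #   try:
    #       repo.obsstore.create(tr, A, (Aprime,),
    #                            metadata={'user': 'alice'})
    #       tr.close()
    #   finally:
    #       tr.release()
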
    def add(self, transaction, markers):
        """Add new markers to the store

        Takes care of filtering out duplicates.
        Returns the number of new markers."""
        if not _enabled:
            raise util.Abort('obsolete feature is not enabled on this repo')
        known = set(self._all)
        new = []
        for m in markers:
            if m not in known:
                known.add(m)
                new.append(m)
        if new:
            f = self.sopener('obsstore', 'ab')
            try:
                # Whether the file's current position is at the beginning or
                # at the end after opening a file for appending is
                # implementation defined. So we must seek to the end before
                # calling tell(), or we may get a zero offset for non-zero
                # sized files on some platforms (issue3543).
                f.seek(0, _SEEK_END)
                offset = f.tell()
                transaction.add('obsstore', offset)
                # offset == 0: new file - add the version header
                for bytes in _encodemarkers(new, offset == 0):
                    f.write(bytes)
            finally:
                # XXX: f.close() == filecache invalidation == obsstore rebuilt.
                # call 'filecacheentry.refresh()' here
                f.close()
            self._load(new)
            # new markers *may* have changed several sets. invalidate the
            # caches.
            self.caches.clear()
        return len(new)

    def mergemarkers(self, transaction, data):
        markers = _readmarkers(data)
        self.add(transaction, markers)

    def _load(self, markers):
        for mark in markers:
            self._all.append(mark)
            pre, sucs = mark[:2]
            self.successors.setdefault(pre, set()).add(mark)
            for suc in sucs:
                self.precursors.setdefault(suc, set()).add(mark)
        if node.nullid in self.precursors:
            raise util.Abort(_('bad obsolescence marker detected: '
                               'invalid successors nullid'))

def _encodemarkers(markers, addheader=False):
    # Kept separate from flushmarkers(), it will be reused for
    # markers exchange.
    if addheader:
        yield _pack('>B', _fmversion)
    for marker in markers:
        yield _encodeonemarker(marker)


def _encodeonemarker(marker):
    pre, sucs, flags, metadata = marker
    nbsuc = len(sucs)
    format = _fmfixed + (_fmnode * nbsuc)
    data = [nbsuc, len(metadata), flags, pre]
    data.extend(sucs)
    return _pack(format, *data) + metadata

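# Size sketch (illustrative only, hypothetical helper): the encoded size of
# one marker follows directly from the format constants above, which helps
# when budgeting payloads against _maxpayload below:
#
#   def _markersize(marker):
#       # fixed part + one node per successor + raw metadata bytes
#       return _fmfsize + _fnodesize * len(marker[1]) + len(marker[3])
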
# arbitrarily picked to fit into the 8K limit from HTTP servers
# you have to take into account:
# - the version header
# - the base85 encoding
_maxpayload = 5300

def listmarkers(repo):
    """List markers over pushkey"""
    if not repo.obsstore:
        return {}
    keys = {}
    parts = []
    currentlen = _maxpayload * 2 # ensure we create a new part
    for marker in repo.obsstore:
        nextdata = _encodeonemarker(marker)
        if (len(nextdata) + currentlen > _maxpayload):
            currentpart = []
            currentlen = 0
            parts.append(currentpart)
        currentpart.append(nextdata)
        currentlen += len(nextdata)
    for idx, part in enumerate(reversed(parts)):
        data = ''.join([_pack('>B', _fmversion)] + part)
        keys['dump%i' % idx] = base85.b85encode(data)
    return keys

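# Wire sketch (an assumption, not original code): a client fetches these
# keys over pushkey and decodes each part back to raw marker data, each part
# starting with its own version byte:
#
#   for key, value in remote.listkeys('obsolete').items():
#       if key.startswith('dump'):
#           data = base85.b85decode(value)
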
def pushmarker(repo, key, old, new):
    """Push markers over pushkey"""
    if not key.startswith('dump'):
        repo.ui.warn(_('unknown key: %r') % key)
        return 0
    if old:
        repo.ui.warn(_('unexpected old value for %r') % key)
        return 0
    data = base85.b85decode(new)
    lock = repo.lock()
    try:
        tr = repo.transaction('pushkey: obsolete markers')
        try:
            repo.obsstore.mergemarkers(tr, data)
            tr.close()
            return 1
        finally:
            tr.release()
    finally:
        lock.release()

def syncpush(repo, remote):
    """utility function to push obsolete markers to a remote

    Exists mostly to allow overriding for experimentation purposes"""
    if (_enabled and repo.obsstore and
        'obsolete' in remote.listkeys('namespaces')):
        rslts = []
        remotedata = repo.listkeys('obsolete')
        for key in sorted(remotedata, reverse=True):
            # reverse sort to ensure we end with dump0
            data = remotedata[key]
            rslts.append(remote.pushkey('obsolete', key, '', data))
        if [r for r in rslts if not r]:
            msg = _('failed to push some obsolete markers!\n')
            repo.ui.warn(msg)

def syncpull(repo, remote, gettransaction):
    """utility function to pull obsolete markers from a remote

    `gettransaction` is a function that returns the pull transaction,
    creating one if necessary. We return the transaction to inform the
    calling code that a new transaction has been created (when applicable).

    Exists mostly to allow overriding for experimentation purposes"""
    tr = None
    if _enabled:
        repo.ui.debug('fetching remote obsolete markers\n')
        remoteobs = remote.listkeys('obsolete')
        if 'dump0' in remoteobs:
            tr = gettransaction()
            for key in sorted(remoteobs, reverse=True):
                if key.startswith('dump'):
                    data = base85.b85decode(remoteobs[key])
                    repo.obsstore.mergemarkers(tr, data)
            repo.invalidatevolatilesets()
    return tr

def allmarkers(repo):
    """all obsolete markers known in a repository"""
    for markerdata in repo.obsstore:
        yield marker(repo, markerdata)

def precursormarkers(ctx):
    """obsolete markers marking this changeset as a successor"""
    for data in ctx._repo.obsstore.precursors.get(ctx.node(), ()):
        yield marker(ctx._repo, data)

def successormarkers(ctx):
    """obsolete markers making this changeset obsolete"""
    for data in ctx._repo.obsstore.successors.get(ctx.node(), ()):
        yield marker(ctx._repo, data)

def allsuccessors(obsstore, nodes, ignoreflags=0):
    """Yield node for every successor of <nodes>.

    Some successors may be unknown locally.

    This is a linear yield unsuited to detecting split changesets."""
    remaining = set(nodes)
    seen = set(remaining)
    while remaining:
        current = remaining.pop()
        yield current
        for mark in obsstore.successors.get(current, ()):
-           # ignore marker flagged with with specified flag
+           # ignore marker flagged with specified flag
            if mark[2] & ignoreflags:
                continue
            for suc in mark[1]:
                if suc not in seen:
                    seen.add(suc)
                    remaining.add(suc)

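# Worked example (hypothetical nodes): with markers (A, (B,)) and (B, (C, D))
# in the store, allsuccessors(repo.obsstore, [A]) yields A, B, C and D, in no
# guaranteed order -- every node reachable through successor edges, including
# the starting point. Being a linear walk, it cannot tell a split from two
# independent rewrites; successorssets() below exists for that.
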
def foreground(repo, nodes):
    """return all nodes in the "foreground" of other nodes

    The foreground of a revision is anything reachable using parent ->
    children or precursor -> successor relations. It is very similar to
    "descendant" but augmented with obsolescence information.

    Beware that possible obsolescence cycles may result in complex
    situations.
    """
    repo = repo.unfiltered()
    foreground = set(repo.set('%ln::', nodes))
    if repo.obsstore:
        # We only need this complicated logic if there is obsolescence
        # XXX will probably deserve an optimised revset.
        nm = repo.changelog.nodemap
        plen = -1
        # compute the whole set of successors or descendants
        while len(foreground) != plen:
            plen = len(foreground)
            succs = set(c.node() for c in foreground)
            mutable = [c.node() for c in foreground if c.mutable()]
            succs.update(allsuccessors(repo.obsstore, mutable))
            known = (n for n in succs if n in nm)
            foreground = set(repo.set('%ln::', known))
    return set(c.node() for c in foreground)


def successorssets(repo, initialnode, cache=None):
    """Return all sets of successors of the initial node

    A successors set of changeset A is a group of revisions that succeed A.
    It succeeds A as a consistent whole, each revision being only a partial
    replacement. Successors sets contain non-obsolete changesets only.

    In most cases a changeset A has zero successors sets (changeset pruned)
    or a single successors set that contains a single successor (changeset A
    replaced by A').

    When a changeset is split, it results in a successors set containing
    more than a single element. Divergent rewriting will result in multiple
    successors sets.

    They are returned as a list of tuples containing all valid successors
    sets.

    Final successors unknown locally are considered plain prunes (obsoleted
    without successors).

    The optional `cache` parameter is a dictionary that may contain
    precomputed successors sets. It is meant to reuse the computation of a
    previous call to `successorssets` when multiple calls are made at the
    same time. The cache dictionary is updated in place. The caller is
    responsible for its life span. Code that makes multiple calls to
    `successorssets` *must* use this cache mechanism or suffer terrible
    performance."""

    succmarkers = repo.obsstore.successors

    # Stack of nodes we search successors sets for
    toproceed = [initialnode]
    # set version of above list for fast loop detection
    # element added to "toproceed" must be added here
    stackedset = set(toproceed)
    if cache is None:
        cache = {}

    # This while loop is the flattened version of a recursive search for
    # successors sets
    #
    # def successorssets(x):
    #     successors = directsuccessors(x)
    #     ss = [[]]
    #     for succ in directsuccessors(x):
    #         # product as in itertools cartesian product
    #         ss = product(ss, successorssets(succ))
    #     return ss
    #
    # But we can not use plain recursive calls here:
    # - that would blow the python call stack
    # - obsolescence markers may have cycles, we need to handle them.
    #
    # The `toproceed` list acts as our call stack. Every node we search
    # successors sets for is stacked there.
    #
    # The `stackedset` is a set version of this stack used to check if a
    # node is already stacked. This check is used to detect cycles and
    # prevent infinite loops.
    #
    # successors sets of all nodes are stored in the `cache` dictionary.
    #
    # After this while loop ends we use the cache to return the successors
    # sets for the node requested by the caller.
    while toproceed:
        # Every iteration tries to compute the successors sets of the topmost
        # node of the stack: CURRENT.
        #
        # There are four possible outcomes:
        #
        # 1) We already know the successors sets of CURRENT:
        #    -> mission accomplished, pop it from the stack.
        # 2) Node is not obsolete:
        #    -> the node is its own successors sets. Add it to the cache.
        # 3) We do not know successors set of direct successors of CURRENT:
        #    -> We add those successors to the stack.
        # 4) We know successors sets of all direct successors of CURRENT:
        #    -> We can compute CURRENT successors set and add it to the
        #       cache.
        #
        current = toproceed[-1]
        if current in cache:
            # case (1): We already know the successors sets
            stackedset.remove(toproceed.pop())
        elif current not in succmarkers:
            # case (2): The node is not obsolete.
            if current in repo:
                # We have a valid last successor.
                cache[current] = [(current,)]
            else:
                # Final obsolete version is unknown locally.
                # Do not count that as a valid successor.
                cache[current] = []
        else:
            # cases (3) and (4)
            #
            # We proceed in two phases. Phase 1 aims to distinguish case (3)
            # from case (4):
            #
            #     For each direct successor of CURRENT, we check whether its
            #     successors sets are known. If they are not, we stack the
            #     unknown node and proceed to the next iteration of the while
            #     loop. (case 3)
            #
            #     During this step, we may detect obsolescence cycles: a node
            #     with unknown successors sets but already in the call stack.
            #     In such a situation, we arbitrarily set the successors sets
            #     of the node to nothing (node pruned) to break the cycle.
            #
            #     If no break was encountered we proceed to phase 2.
            #
            # Phase 2 computes successors sets of CURRENT (case 4); see
            # details in phase 2 itself.
            #
            # Note the two levels of iteration in each phase.
            # - The first one handles obsolescence markers using CURRENT as
            #   precursor (successors markers of CURRENT).
            #
            #   Having multiple entries here means divergence.
            #
            # - The second one handles successors defined in each marker.
            #
            #   Having none means pruned node, multiple successors means
            #   split, a single successor is a standard replacement.
            #
            for mark in sorted(succmarkers[current]):
                for suc in mark[1]:
                    if suc not in cache:
                        if suc in stackedset:
                            # cycle breaking
                            cache[suc] = []
                        else:
                            # case (3) If we have not computed successors
                            # sets of one of those successors we add it to
                            # the `toproceed` stack and stop all work for
                            # this iteration.
                            toproceed.append(suc)
                            stackedset.add(suc)
                            break
                else:
                    continue
                break
            else:
                # case (4): we know all successors sets of all direct
                # successors
                #
                # Successors set contributed by each marker depends on the
                # successors sets of all its "successors" nodes.
                #
                # Each different marker is a divergence in the obsolescence
                # history. It contributes successors sets distinct from other
                # markers.
                #
                # Within a marker, a successor may have divergent successors
                # sets. In such a case, the marker will contribute multiple
                # divergent successors sets. If multiple successors have
                # divergent successors sets, a cartesian product is used.
                #
                # At the end we post-process successors sets to remove
                # duplicated entries and successors sets that are strict
                # subsets of another one.
                succssets = []
                for mark in sorted(succmarkers[current]):
                    # successors sets contributed by this marker
                    markss = [[]]
                    for suc in mark[1]:
                        # cartesian product with previous successors
                        productresult = []
                        for prefix in markss:
                            for suffix in cache[suc]:
                                newss = list(prefix)
                                for part in suffix:
                                    # do not duplicate entries in a
                                    # successors set; first entry wins.
                                    if part not in newss:
                                        newss.append(part)
                                productresult.append(newss)
                        markss = productresult
                    succssets.extend(markss)
                # remove duplicated and subset
                seen = []
                final = []
                candidate = sorted(((set(s), s) for s in succssets if s),
                                   key=lambda x: len(x[1]), reverse=True)
                for setversion, listversion in candidate:
                    for seenset in seen:
                        if setversion.issubset(seenset):
                            break
                    else:
                        final.append(listversion)
                        seen.append(setversion)
                final.reverse() # put small successors set first
                cache[current] = final
    return cache[initialnode]

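# Worked examples (hypothetical nodes A, B, C; exact container types and
# list order elided) of the cases discussed above:
#
#   rewrite, marker (A, (B,)):           successorssets(repo, A) -> [[B]]
#   split, marker (A, (B, C)):           successorssets(repo, A) -> [[B, C]]
#   divergence, (A, (B,)), (A, (C,)):    successorssets(repo, A) -> [[B], [C]]
#   prune, marker (A, ()):               successorssets(repo, A) -> []
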
def _knownrevs(repo, nodes):
    """yield revision numbers of known nodes passed as parameters

    Unknown revisions are silently ignored."""
    torev = repo.changelog.nodemap.get
    for n in nodes:
        rev = torev(n)
        if rev is not None:
            yield rev

# mapping of 'set-name' -> <function to compute this set>
cachefuncs = {}
def cachefor(name):
    """Decorator to register a function as computing the cache for a set"""
    def decorator(func):
        assert name not in cachefuncs
        cachefuncs[name] = func
        return func
    return decorator

def getrevs(repo, name):
    """Return the set of revisions that belong to the <name> set

    Such access may compute the set and cache it for future use"""
    repo = repo.unfiltered()
    if not repo.obsstore:
        return ()
    if name not in repo.obsstore.caches:
        repo.obsstore.caches[name] = cachefuncs[name](repo)
    return repo.obsstore.caches[name]

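# Usage sketch (illustrative): the names accepted by getrevs() are the ones
# registered through @cachefor below; the first call computes the set, later
# calls hit the cache until clearobscaches() is called:
#
#   obs = getrevs(repo, 'obsolete')        # set of obsolete revision numbers
#   unstable = getrevs(repo, 'unstable')
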
# To be simple we need to invalidate obsolescence caches when:
#
# - a new changeset is added
# - the public phase is changed
# - obsolescence markers are added
# - strip is used on a repo
def clearobscaches(repo):
    """Remove all obsolescence related caches from a repo

    This removes all caches in the obsstore if the obsstore already exists
    on the repo.

    (We could be smarter here given the exact events that trigger the cache
    clearing)"""
    # only clear caches if there is obsstore data in this repo
    if 'obsstore' in repo._filecache:
        repo.obsstore.caches.clear()

@cachefor('obsolete')
def _computeobsoleteset(repo):
    """the set of obsolete revisions"""
    obs = set()
    getrev = repo.changelog.nodemap.get
    getphase = repo._phasecache.phase
    for node in repo.obsstore.successors:
        rev = getrev(node)
        if rev is not None and getphase(repo, rev):
            obs.add(rev)
    return obs

@cachefor('unstable')
def _computeunstableset(repo):
    """the set of non obsolete revisions with obsolete parents"""
    # revset is not efficient enough here
    # we do (obsolete()::) - obsolete() by hand
    obs = getrevs(repo, 'obsolete')
    if not obs:
        return set()
    cl = repo.changelog
    return set(r for r in cl.descendants(obs) if r not in obs)

@cachefor('suspended')
def _computesuspendedset(repo):
    """the set of obsolete parents with non obsolete descendants"""
    suspended = repo.changelog.ancestors(getrevs(repo, 'unstable'))
    return set(r for r in getrevs(repo, 'obsolete') if r in suspended)

@cachefor('extinct')
def _computeextinctset(repo):
    """the set of obsolete parents without non obsolete descendants"""
    return getrevs(repo, 'obsolete') - getrevs(repo, 'suspended')


@cachefor('bumped')
def _computebumpedset(repo):
    """the set of revs trying to obsolete public revisions"""
    # get all possible bumped changesets
    tonode = repo.changelog.node
    publicnodes = (tonode(r) for r in repo.revs('public()'))
    successors = allsuccessors(repo.obsstore, publicnodes,
                               ignoreflags=bumpedfix)
    # revisions public or already obsolete don't count as bumped
    query = '%ld - obsolete() - public()'
    return set(repo.revs(query, _knownrevs(repo, successors)))

@cachefor('divergent')
def _computedivergentset(repo):
    """the set of revs that compete to be the final successors of some
    revision."""
    divergent = set()
    obsstore = repo.obsstore
    newermap = {}
    for ctx in repo.set('(not public()) - obsolete()'):
        mark = obsstore.precursors.get(ctx.node(), ())
        toprocess = set(mark)
        while toprocess:
            prec = toprocess.pop()[0]
            if prec not in newermap:
                successorssets(repo, prec, newermap)
            newer = [n for n in newermap[prec] if n]
            if len(newer) > 1:
                divergent.add(ctx.rev())
                break
            toprocess.update(obsstore.precursors.get(prec, ()))
    return divergent


def createmarkers(repo, relations, flag=0, metadata=None):
    """Add obsolete markers between changesets in a repo

    <relations> must be an iterable of (<old>, (<new>, ...)) tuples.
    `old` and `new` are changectxs.

    Trying to obsolete a public changeset will raise an exception.

    Current user and date are used except if specified otherwise in the
    metadata attribute.

    This function operates within a transaction of its own, but does
    not take any lock on the repo.
    """
    # prepare metadata
    if metadata is None:
        metadata = {}
    if 'date' not in metadata:
        metadata['date'] = '%i %i' % util.makedate()
    if 'user' not in metadata:
        metadata['user'] = repo.ui.username()
    tr = repo.transaction('add-obsolescence-marker')
    try:
        for prec, sucs in relations:
            if not prec.mutable():
                raise util.Abort("cannot obsolete immutable changeset: %s"
                                 % prec)
            nprec = prec.node()
            nsucs = tuple(s.node() for s in sucs)
            if nprec in nsucs:
                raise util.Abort("changeset %s cannot obsolete itself" % prec)
            repo.obsstore.create(tr, nprec, nsucs, flag, metadata)
            repo.filteredrevcache.clear()
        tr.close()
    finally:
        tr.release()
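
# End-to-end sketch (hypothetical changectxs 'old' and 'new'; not original
# code): recording that 'old' was rewritten into 'new'. We hold the lock
# ourselves since createmarkers() opens a transaction but takes no lock:
#
#   lock = repo.lock()
#   try:
#       createmarkers(repo, [(old, (new,))])
#   finally:
#       lock.release()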