createmarkers: allow to pass metadata for a marker only...
Pierre-Yves David
r20517:2158e8f3 default
# obsolete.py - obsolete markers handling
#
# Copyright 2012 Pierre-Yves David <pierre-yves.david@ens-lyon.org>
#                Logilab SA        <contact@logilab.fr>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

9 """Obsolete markers handling
9 """Obsolete markers handling
10
10
11 An obsolete marker maps an old changeset to a list of new
11 An obsolete marker maps an old changeset to a list of new
12 changesets. If the list of new changesets is empty, the old changeset
12 changesets. If the list of new changesets is empty, the old changeset
13 is said to be "killed". Otherwise, the old changeset is being
13 is said to be "killed". Otherwise, the old changeset is being
14 "replaced" by the new changesets.
14 "replaced" by the new changesets.
15
15
16 Obsolete markers can be used to record and distribute changeset graph
16 Obsolete markers can be used to record and distribute changeset graph
17 transformations performed by history rewriting operations, and help
17 transformations performed by history rewriting operations, and help
18 building new tools to reconciliate conflicting rewriting actions. To
18 building new tools to reconciliate conflicting rewriting actions. To
19 facilitate conflicts resolution, markers include various annotations
19 facilitate conflicts resolution, markers include various annotations
20 besides old and news changeset identifiers, such as creation date or
20 besides old and news changeset identifiers, such as creation date or
21 author name.
21 author name.
22
22
23 The old obsoleted changeset is called "precursor" and possible replacements are
23 The old obsoleted changeset is called "precursor" and possible replacements are
24 called "successors". Markers that used changeset X as a precursors are called
24 called "successors". Markers that used changeset X as a precursors are called
25 "successor markers of X" because they hold information about the successors of
25 "successor markers of X" because they hold information about the successors of
26 X. Markers that use changeset Y as a successors are call "precursor markers of
26 X. Markers that use changeset Y as a successors are call "precursor markers of
27 Y" because they hold information about the precursors of Y.
27 Y" because they hold information about the precursors of Y.
28
28
29 Examples:
29 Examples:
30
30
31 - When changeset A is replacement by a changeset A', one marker is stored:
31 - When changeset A is replacement by a changeset A', one marker is stored:
32
32
33 (A, (A'))
33 (A, (A'))
34
34
35 - When changesets A and B are folded into a new changeset C two markers are
35 - When changesets A and B are folded into a new changeset C two markers are
36 stored:
36 stored:
37
37
38 (A, (C,)) and (B, (C,))
38 (A, (C,)) and (B, (C,))
39
39
40 - When changeset A is simply "pruned" from the graph, a marker in create:
40 - When changeset A is simply "pruned" from the graph, a marker in create:
41
41
42 (A, ())
42 (A, ())
43
43
44 - When changeset A is split into B and C, a single marker are used:
44 - When changeset A is split into B and C, a single marker are used:
45
45
46 (A, (C, C))
46 (A, (C, C))
47
47
48 We use a single marker to distinct the "split" case from the "divergence"
48 We use a single marker to distinct the "split" case from the "divergence"
49 case. If two independents operation rewrite the same changeset A in to A' and
49 case. If two independents operation rewrite the same changeset A in to A' and
50 A'' when have an error case: divergent rewriting. We can detect it because
50 A'' when have an error case: divergent rewriting. We can detect it because
51 two markers will be created independently:
51 two markers will be created independently:
52
52
53 (A, (B,)) and (A, (C,))
53 (A, (B,)) and (A, (C,))
54
54
55 Format
55 Format
56 ------
56 ------
57
57
58 Markers are stored in an append-only file stored in
58 Markers are stored in an append-only file stored in
59 '.hg/store/obsstore'.
59 '.hg/store/obsstore'.
60
60
61 The file starts with a version header:
61 The file starts with a version header:
62
62
63 - 1 unsigned byte: version number, starting at zero.
63 - 1 unsigned byte: version number, starting at zero.
64
64
65
65
66 The header is followed by the markers. Each marker is made of:
66 The header is followed by the markers. Each marker is made of:
67
67
68 - 1 unsigned byte: number of new changesets "R", could be zero.
68 - 1 unsigned byte: number of new changesets "R", could be zero.
69
69
70 - 1 unsigned 32-bits integer: metadata size "M" in bytes.
70 - 1 unsigned 32-bits integer: metadata size "M" in bytes.
71
71
72 - 1 byte: a bit field. It is reserved for flags used in obsolete
72 - 1 byte: a bit field. It is reserved for flags used in obsolete
73 markers common operations, to avoid repeated decoding of metadata
73 markers common operations, to avoid repeated decoding of metadata
74 entries.
74 entries.
75
75
76 - 20 bytes: obsoleted changeset identifier.
76 - 20 bytes: obsoleted changeset identifier.
77
77
78 - N*20 bytes: new changesets identifiers.
78 - N*20 bytes: new changesets identifiers.
79
79
80 - M bytes: metadata as a sequence of nul-terminated strings. Each
80 - M bytes: metadata as a sequence of nul-terminated strings. Each
81 string contains a key and a value, separated by a color ':', without
81 string contains a key and a value, separated by a color ':', without
82 additional encoding. Keys cannot contain '\0' or ':' and values
82 additional encoding. Keys cannot contain '\0' or ':' and values
83 cannot contain '\0'.
83 cannot contain '\0'.
84 """
84 """
import struct
import util, base85, node
import phases
from i18n import _

_pack = struct.pack
_unpack = struct.unpack

_SEEK_END = 2 # os.SEEK_END was introduced in Python 2.5

# the obsolete feature is not mature enough to be enabled by default.
# you have to rely on a third party extension to enable this.
_enabled = False

# data used for parsing and writing
_fmversion = 0
_fmfixed = '>BIB20s'
_fmnode = '20s'
_fmfsize = struct.calcsize(_fmfixed)
_fnodesize = struct.calcsize(_fmnode)

### obsolescence marker flag

## bumpedfix flag
#
# When a changeset A' succeeds a changeset A which became public, we call A'
# "bumped" because it's a successor of a public changeset
#
# o A' (bumped)
# |`:
# | o A
# |/
# o Z
#
# The way to solve this situation is to create a new changeset Ad as a child
# of A. This changeset has the same content as A'. So the diff from A to A'
# is the same as the diff from A to Ad. Ad is marked as a successor of A'
#
# o Ad
# |`:
# | x A'
# |'|
# o | A
# |/
# o Z
#
# But by transitivity Ad is also a successor of A. To avoid having Ad marked
# as bumped too, we add the `bumpedfix` flag to the marker: <A', (Ad,)>.
# This flag means that the successor expresses the changes between the public
# and bumped version and fixes the situation, breaking the transitivity of
# "bumped" here.
bumpedfix = 1

def _readmarkers(data):
    """Read and enumerate markers from raw data"""
    off = 0
    diskversion = _unpack('>B', data[off:off + 1])[0]
    off += 1
    if diskversion != _fmversion:
        raise util.Abort(_('parsing obsolete marker: unknown version %r')
                         % diskversion)

    # Loop on markers
    l = len(data)
    while off + _fmfsize <= l:
        # read fixed part
        cur = data[off:off + _fmfsize]
        off += _fmfsize
        nbsuc, mdsize, flags, pre = _unpack(_fmfixed, cur)
        # read replacement
        sucs = ()
        if nbsuc:
            s = (_fnodesize * nbsuc)
            cur = data[off:off + s]
            sucs = _unpack(_fmnode * nbsuc, cur)
            off += s
        # read metadata
        # (metadata will be decoded on demand)
        metadata = data[off:off + mdsize]
        if len(metadata) != mdsize:
            raise util.Abort(_('parsing obsolete marker: metadata is too '
                               'short, %d bytes expected, got %d')
                             % (mdsize, len(metadata)))
        off += mdsize
        yield (pre, sucs, flags, metadata)

def encodemeta(meta):
    """Return encoded metadata string to string mapping.

    Assume no ':' in key and no '\0' in both key and value."""
    for key, value in meta.iteritems():
        if ':' in key or '\0' in key:
            raise ValueError("':' and '\0' are forbidden in metadata key")
        if '\0' in value:
            raise ValueError("'\0' is forbidden in metadata value")
    return '\0'.join(['%s:%s' % (k, meta[k]) for k in sorted(meta)])

def decodemeta(data):
    """Return string to string dictionary from encoded version."""
    d = {}
    for l in data.split('\0'):
        if l:
            # values may contain ':', so split on the first one only
            key, value = l.split(':', 1)
            d[key] = value
    return d
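The two helpers above are inverses of each other. A standalone Python 3 re-sketch of the same scheme (using `dict.items()`; the in-module versions target Python 2) shows the round-trip, including a value that itself contains a colon:

```python
def encodemeta(meta):
    # keys may not contain ':' or NUL; values may not contain NUL
    for key, value in meta.items():
        if ':' in key or '\0' in key:
            raise ValueError("':' and '\\0' are forbidden in metadata key")
        if '\0' in value:
            raise ValueError("'\\0' is forbidden in metadata value")
    # 'key:value' entries joined by NUL, sorted for a canonical encoding
    return '\0'.join('%s:%s' % (k, meta[k]) for k in sorted(meta))

def decodemeta(data):
    d = {}
    for entry in data.split('\0'):
        if entry:
            # split on the first ':' only, since values may contain ':'
            key, value = entry.split(':', 1)
            d[key] = value
    return d

meta = {'user': 'alice', 'date': '0 0'}
assert decodemeta(encodemeta(meta)) == meta
```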

class marker(object):
    """Wrap obsolete marker raw data"""

    def __init__(self, repo, data):
        # the repo argument will be used to create changectx in later version
        self._repo = repo
        self._data = data
        self._decodedmeta = None

    def __hash__(self):
        return hash(self._data)

    def __eq__(self, other):
        if type(other) != type(self):
            return False
        return self._data == other._data

    def precnode(self):
        """Precursor changeset node identifier"""
        return self._data[0]

    def succnodes(self):
        """List of successor changeset node identifiers"""
        return self._data[1]

    def metadata(self):
        """Decoded metadata dictionary"""
        if self._decodedmeta is None:
            self._decodedmeta = decodemeta(self._data[3])
        return self._decodedmeta

    def date(self):
        """Creation date as (unixtime, offset)"""
        parts = self.metadata()['date'].split(' ')
        return (float(parts[0]), int(parts[1]))

class obsstore(object):
    """Store obsolete markers

    Markers can be accessed with two mappings:
    - precursors[x] -> set(markers on precursors edges of x)
    - successors[x] -> set(markers on successors edges of x)
    """

    def __init__(self, sopener):
        # caches for various obsolescence related computations
        self.caches = {}
        self._all = []
        # new markers to serialize
        self.precursors = {}
        self.successors = {}
        self.sopener = sopener
        data = sopener.tryread('obsstore')
        if data:
            self._load(_readmarkers(data))

    def __iter__(self):
        return iter(self._all)

    def __nonzero__(self):
        return bool(self._all)

    def create(self, transaction, prec, succs=(), flag=0, metadata=None):
        """obsolete: add a new obsolete marker

        * ensure it is hashable
        * check mandatory metadata
        * encode metadata

        If you are a human writing code creating markers you want to use the
        `createmarkers` function in this module instead.
        """
        if metadata is None:
            metadata = {}
        if 'date' not in metadata:
            metadata['date'] = "%d %d" % util.makedate()
        if len(prec) != 20:
            raise ValueError(prec)
        for succ in succs:
            if len(succ) != 20:
                raise ValueError(succ)
        marker = (str(prec), tuple(succs), int(flag), encodemeta(metadata))
        self.add(transaction, [marker])

    def add(self, transaction, markers):
        """Add new markers to the store

        Take care of filtering duplicates.
        Return the number of new markers."""
        if not _enabled:
            raise util.Abort('obsolete feature is not enabled on this repo')
        known = set(self._all)
        new = []
        for m in markers:
            if m not in known:
                known.add(m)
                new.append(m)
        if new:
            f = self.sopener('obsstore', 'ab')
            try:
                # Whether the file's current position is at the beginning or
                # at the end after opening a file for appending is
                # implementation defined. So we must seek to the end before
                # calling tell(), or we may get a zero offset for non-zero
                # sized files on some platforms (issue3543).
                f.seek(0, _SEEK_END)
                offset = f.tell()
                transaction.add('obsstore', offset)
                # offset == 0: new file - add the version header
                for bytes in _encodemarkers(new, offset == 0):
                    f.write(bytes)
            finally:
                # XXX: f.close() == filecache invalidation == obsstore rebuilt.
                # call 'filecacheentry.refresh()' here
                f.close()
            self._load(new)
            # new markers *may* have changed several sets. invalidate the cache.
            self.caches.clear()
        return len(new)

    def mergemarkers(self, transaction, data):
        markers = _readmarkers(data)
        self.add(transaction, markers)

    def _load(self, markers):
        for mark in markers:
            self._all.append(mark)
            pre, sucs = mark[:2]
            self.successors.setdefault(pre, set()).add(mark)
            for suc in sucs:
                self.precursors.setdefault(suc, set()).add(mark)
        if node.nullid in self.precursors:
            raise util.Abort(_('bad obsolescence marker detected: '
                               'invalid successors nullid'))

def _encodemarkers(markers, addheader=False):
    # Kept separate from flushmarkers(), it will be reused for
    # markers exchange.
    if addheader:
        yield _pack('>B', _fmversion)
    for marker in markers:
        yield _encodeonemarker(marker)


def _encodeonemarker(marker):
    pre, sucs, flags, metadata = marker
    nbsuc = len(sucs)
    format = _fmfixed + (_fmnode * nbsuc)
    data = [nbsuc, len(metadata), flags, pre]
    data.extend(sucs)
    return _pack(format, *data) + metadata

# arbitrarily picked to fit into 8K limit from HTTP server
# you have to take into account:
# - the version header
# - the base85 encoding
_maxpayload = 5300

def listmarkers(repo):
    """List markers over pushkey"""
    if not repo.obsstore:
        return {}
    keys = {}
    parts = []
    currentlen = _maxpayload * 2 # ensure we create a new part
    for marker in repo.obsstore:
        nextdata = _encodeonemarker(marker)
        if (len(nextdata) + currentlen > _maxpayload):
            currentpart = []
            currentlen = 0
            parts.append(currentpart)
        currentpart.append(nextdata)
        currentlen += len(nextdata)
    for idx, part in enumerate(reversed(parts)):
        data = ''.join([_pack('>B', _fmversion)] + part)
        keys['dump%i' % idx] = base85.b85encode(data)
    return keys
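The loop in `listmarkers` is a simple greedy packer: encoded markers accumulate into the current part until the next one would push it past the payload cap, at which point a fresh part is started. A standalone sketch of the same strategy (the helper name `chunk_payloads` is hypothetical):

```python
def chunk_payloads(blobs, maxpayload):
    """Greedily pack blobs into parts, starting a new part whenever
    adding the next blob would exceed maxpayload."""
    parts = []
    currentpart = None
    currentlen = maxpayload * 2  # force creation of the first part
    for blob in blobs:
        if len(blob) + currentlen > maxpayload:
            currentpart = []
            currentlen = 0
            parts.append(currentpart)
        currentpart.append(blob)
        currentlen += len(blob)
    return parts

# three 40-byte blobs with a 100-byte cap: two parts of 2 and 1 blobs
parts = chunk_payloads([b'x' * 40] * 3, 100)
assert [len(p) for p in parts] == [2, 1]
```

As in the original, a single blob larger than the cap still gets a part of its own; the cap bounds when a new part starts, not the size of any individual entry.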

def pushmarker(repo, key, old, new):
    """Push markers over pushkey"""
    if not key.startswith('dump'):
        repo.ui.warn(_('unknown key: %r') % key)
        return 0
    if old:
        repo.ui.warn(_('unexpected old value for %r') % key)
        return 0
    data = base85.b85decode(new)
    lock = repo.lock()
    try:
        tr = repo.transaction('pushkey: obsolete markers')
        try:
            repo.obsstore.mergemarkers(tr, data)
            tr.close()
            return 1
        finally:
            tr.release()
    finally:
        lock.release()

def allmarkers(repo):
    """all obsolete markers known in a repository"""
    for markerdata in repo.obsstore:
        yield marker(repo, markerdata)

def precursormarkers(ctx):
    """obsolete marker marking this changeset as a successor"""
    for data in ctx._repo.obsstore.precursors.get(ctx.node(), ()):
        yield marker(ctx._repo, data)

def successormarkers(ctx):
    """obsolete marker making this changeset obsolete"""
    for data in ctx._repo.obsstore.successors.get(ctx.node(), ()):
        yield marker(ctx._repo, data)

def allsuccessors(obsstore, nodes, ignoreflags=0):
    """Yield node for every successor of <nodes>.

    Some successors may be unknown locally.

    This is a linear yield unsuited to detecting split changesets. It includes
    initial nodes too."""
    remaining = set(nodes)
    seen = set(remaining)
    while remaining:
        current = remaining.pop()
        yield current
        for mark in obsstore.successors.get(current, ()):
            # ignore marker flagged with specified flag
            if mark[2] & ignoreflags:
                continue
            for suc in mark[1]:
                if suc not in seen:
                    seen.add(suc)
                    remaining.add(suc)
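`allsuccessors` is a plain transitive-closure walk over the successors mapping. The same logic, in a self-contained sketch operating on an ordinary dict of `(prec, succs, flags, meta)` tuples (the name `all_successors` and the sample data are illustrative only):

```python
def all_successors(successors_map, nodes, ignoreflags=0):
    """Transitive closure over marker tuples, yielding the initial
    nodes as well as every reachable successor."""
    remaining = set(nodes)
    seen = set(remaining)
    while remaining:
        current = remaining.pop()
        yield current
        for mark in successors_map.get(current, ()):
            if mark[2] & ignoreflags:
                continue  # skip markers carrying an ignored flag
            for suc in mark[1]:
                if suc not in seen:
                    seen.add(suc)
                    remaining.add(suc)

# A was rewritten into B, and B into C: walking from A reaches all three.
marks = {'A': [('A', ('B',), 0, '')], 'B': [('B', ('C',), 0, '')]}
assert sorted(all_successors(marks, ['A'])) == ['A', 'B', 'C']
```

The `seen` set makes the walk terminate even if markers form a cycle, which the `foreground` docstring below warns is possible.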

def allprecursors(obsstore, nodes, ignoreflags=0):
    """Yield node for every precursor of <nodes>.

    Some precursors may be unknown locally.

    This is a linear yield unsuited to detecting folded changesets. It includes
    initial nodes too."""

    remaining = set(nodes)
    seen = set(remaining)
    while remaining:
        current = remaining.pop()
        yield current
        for mark in obsstore.precursors.get(current, ()):
            # ignore marker flagged with specified flag
            if mark[2] & ignoreflags:
                continue
            prec = mark[0]
            if prec not in seen:
                seen.add(prec)
                remaining.add(prec)

def foreground(repo, nodes):
    """return all nodes in the "foreground" of other nodes

    The foreground of a revision is anything reachable using parent -> children
    or precursor -> successor relation. It is very similar to "descendant" but
    augmented with obsolescence information.

    Beware that possible obsolescence cycles may result in complex situations.
    """
    repo = repo.unfiltered()
    foreground = set(repo.set('%ln::', nodes))
    if repo.obsstore:
        # We only need this complicated logic if there is obsolescence
        # XXX will probably deserve an optimised revset.
        nm = repo.changelog.nodemap
        plen = -1
        # compute the whole set of successors or descendants
        while len(foreground) != plen:
            plen = len(foreground)
            succs = set(c.node() for c in foreground)
            mutable = [c.node() for c in foreground if c.mutable()]
            succs.update(allsuccessors(repo.obsstore, mutable))
            known = (n for n in succs if n in nm)
            foreground = set(repo.set('%ln::', known))
    return set(c.node() for c in foreground)


def successorssets(repo, initialnode, cache=None):
    """Return all sets of successors of the initial node

    The successors set of a changeset A is a group of revisions that succeed
    A. It succeeds A as a consistent whole, each revision being only a partial
    replacement. The successors set contains non-obsolete changesets only.

    This function returns the full list of successor sets which is why it
    returns a list of tuples and not just a single tuple. Each tuple is a valid
    successors set. Note that (A,) may be a valid successors set for changeset
    A (see below).

    In most cases, a changeset A will have a single element (e.g. the changeset
    A is replaced by A') in its successors set. Though, it is also common for a
    changeset A to have no elements in its successors set (e.g. the changeset
    has been pruned). Therefore, the returned list of successors sets will be
    [(A',)] or [], respectively.

    When a changeset A is split into A' and B', however, it will result in a
    successors set containing more than a single element, i.e. [(A',B')].
    Divergent changesets will result in multiple successors sets, i.e. [(A',),
    (A'')].

    If a changeset A is not obsolete, then it will conceptually have no
    successors set. To distinguish this from a pruned changeset, the successors
    set will only contain itself, i.e. [(A,)].

    Finally, successors unknown locally are considered to be pruned (obsoleted
    without any successors).

    The optional `cache` parameter is a dictionary that may contain precomputed
    successors sets. It is meant to reuse the computation of a previous call to
    `successorssets` when multiple calls are made at the same time. The cache
    dictionary is updated in place. The caller is responsible for its life
    span. Code that makes multiple calls to `successorssets` *must* use this
    cache mechanism or suffer terrible performance.
    """

    succmarkers = repo.obsstore.successors
514 succmarkers = repo.obsstore.successors
515
515
516 # Stack of nodes we search successors sets for
516 # Stack of nodes we search successors sets for
517 toproceed = [initialnode]
517 toproceed = [initialnode]
518 # set version of above list for fast loop detection
518 # set version of above list for fast loop detection
519 # element added to "toproceed" must be added here
519 # element added to "toproceed" must be added here
520 stackedset = set(toproceed)
520 stackedset = set(toproceed)
521 if cache is None:
521 if cache is None:
522 cache = {}
522 cache = {}
523
523
524 # This while loop is the flattened version of a recursive search for
524 # This while loop is the flattened version of a recursive search for
525 # successors sets
525 # successors sets
526 #
526 #
527 # def successorssets(x):
527 # def successorssets(x):
528 # successors = directsuccessors(x)
528 # successors = directsuccessors(x)
529 # ss = [[]]
529 # ss = [[]]
530 # for succ in directsuccessors(x):
530 # for succ in directsuccessors(x):
531 # # product as in itertools cartesian product
531 # # product as in itertools cartesian product
532 # ss = product(ss, successorssets(succ))
532 # ss = product(ss, successorssets(succ))
533 # return ss
533 # return ss
534 #
534 #
535 # But we can not use plain recursive calls here:
535 # But we can not use plain recursive calls here:
536 # - that would blow the python call stack
536 # - that would blow the python call stack
537 # - obsolescence markers may have cycles, we need to handle them.
537 # - obsolescence markers may have cycles, we need to handle them.
538 #
538 #
539 # The `toproceed` list act as our call stack. Every node we search
539 # The `toproceed` list act as our call stack. Every node we search
540 # successors set for are stacked there.
540 # successors set for are stacked there.
541 #
541 #
542 # The `stackedset` is set version of this stack used to check if a node is
542 # The `stackedset` is set version of this stack used to check if a node is
543 # already stacked. This check is used to detect cycles and prevent infinite
543 # already stacked. This check is used to detect cycles and prevent infinite
544 # loop.
544 # loop.
545 #
545 #
546 # successors set of all nodes are stored in the `cache` dictionary.
546 # successors set of all nodes are stored in the `cache` dictionary.
547 #
547 #
548 # After this while loop ends we use the cache to return the successors sets
548 # After this while loop ends we use the cache to return the successors sets
549 # for the node requested by the caller.
549 # for the node requested by the caller.
550 while toproceed:
550 while toproceed:
551 # Every iteration tries to compute the successors sets of the topmost
551 # Every iteration tries to compute the successors sets of the topmost
552 # node of the stack: CURRENT.
552 # node of the stack: CURRENT.
553 #
553 #
554 # There are four possible outcomes:
554 # There are four possible outcomes:
555 #
555 #
556 # 1) We already know the successors sets of CURRENT:
556 # 1) We already know the successors sets of CURRENT:
557 # -> mission accomplished, pop it from the stack.
557 # -> mission accomplished, pop it from the stack.
558 # 2) Node is not obsolete:
558 # 2) Node is not obsolete:
559 # -> the node is its own successors sets. Add it to the cache.
559 # -> the node is its own successors sets. Add it to the cache.
560 # 3) We do not know successors set of direct successors of CURRENT:
560 # 3) We do not know successors set of direct successors of CURRENT:
561 # -> We add those successors to the stack.
561 # -> We add those successors to the stack.
562 # 4) We know successors sets of all direct successors of CURRENT:
562 # 4) We know successors sets of all direct successors of CURRENT:
563 # -> We can compute CURRENT successors set and add it to the
563 # -> We can compute CURRENT successors set and add it to the
564 # cache.
564 # cache.
565 #
565 #
566 current = toproceed[-1]
566 current = toproceed[-1]
567 if current in cache:
567 if current in cache:
568 # case (1): We already know the successors sets
568 # case (1): We already know the successors sets
569 stackedset.remove(toproceed.pop())
569 stackedset.remove(toproceed.pop())
570 elif current not in succmarkers:
570 elif current not in succmarkers:
571 # case (2): The node is not obsolete.
571 # case (2): The node is not obsolete.
572 if current in repo:
572 if current in repo:
573 # We have a valid last successors.
573 # We have a valid last successors.
574 cache[current] = [(current,)]
574 cache[current] = [(current,)]
575 else:
575 else:
576 # Final obsolete version is unknown locally.
576 # Final obsolete version is unknown locally.
577 # Do not count that as a valid successors
577 # Do not count that as a valid successors
578 cache[current] = []
578 cache[current] = []
579 else:
579 else:
580 # cases (3) and (4)
580 # cases (3) and (4)
581 #
581 #
582 # We proceed in two phases. Phase 1 aims to distinguish case (3)
582 # We proceed in two phases. Phase 1 aims to distinguish case (3)
583 # from case (4):
583 # from case (4):
584 #
584 #
585 # For each direct successors of CURRENT, we check whether its
585 # For each direct successors of CURRENT, we check whether its
586 # successors sets are known. If they are not, we stack the
586 # successors sets are known. If they are not, we stack the
587 # unknown node and proceed to the next iteration of the while
587 # unknown node and proceed to the next iteration of the while
588 # loop. (case 3)
588 # loop. (case 3)
589 #
589 #
590 # During this step, we may detect obsolescence cycles: a node
590 # During this step, we may detect obsolescence cycles: a node
591 # with unknown successors sets but already in the call stack.
591 # with unknown successors sets but already in the call stack.
592 # In such a situation, we arbitrary set the successors sets of
592 # In such a situation, we arbitrary set the successors sets of
593 # the node to nothing (node pruned) to break the cycle.
593 # the node to nothing (node pruned) to break the cycle.
594 #
594 #
595 # If no break was encountered we proceed to phase 2.
595 # If no break was encountered we proceed to phase 2.
596 #
596 #
597 # Phase 2 computes successors sets of CURRENT (case 4); see details
597 # Phase 2 computes successors sets of CURRENT (case 4); see details
598 # in phase 2 itself.
598 # in phase 2 itself.
599 #
599 #
600 # Note the two levels of iteration in each phase.
600 # Note the two levels of iteration in each phase.
601 # - The first one handles obsolescence markers using CURRENT as
601 # - The first one handles obsolescence markers using CURRENT as
602 # precursor (successors markers of CURRENT).
602 # precursor (successors markers of CURRENT).
603 #
603 #
604 # Having multiple entry here means divergence.
604 # Having multiple entry here means divergence.
605 #
605 #
606 # - The second one handles successors defined in each marker.
606 # - The second one handles successors defined in each marker.
607 #
607 #
608 # Having none means pruned node, multiple successors means split,
608 # Having none means pruned node, multiple successors means split,
609 # single successors are standard replacement.
609 # single successors are standard replacement.
610 #
610 #
611 for mark in sorted(succmarkers[current]):
611 for mark in sorted(succmarkers[current]):
612 for suc in mark[1]:
612 for suc in mark[1]:
613 if suc not in cache:
613 if suc not in cache:
614 if suc in stackedset:
614 if suc in stackedset:
615 # cycle breaking
615 # cycle breaking
616 cache[suc] = []
616 cache[suc] = []
617 else:
617 else:
618 # case (3) If we have not computed successors sets
618 # case (3) If we have not computed successors sets
619 # of one of those successors we add it to the
619 # of one of those successors we add it to the
620 # `toproceed` stack and stop all work for this
620 # `toproceed` stack and stop all work for this
621 # iteration.
621 # iteration.
622 toproceed.append(suc)
622 toproceed.append(suc)
623 stackedset.add(suc)
623 stackedset.add(suc)
624 break
624 break
625 else:
625 else:
626 continue
626 continue
627 break
627 break
628 else:
628 else:
629 # case (4): we know all successors sets of all direct
629 # case (4): we know all successors sets of all direct
630 # successors
630 # successors
631 #
631 #
632 # Successors set contributed by each marker depends on the
632 # Successors set contributed by each marker depends on the
633 # successors sets of all its "successors" node.
633 # successors sets of all its "successors" node.
634 #
634 #
635 # Each different marker is a divergence in the obsolescence
635 # Each different marker is a divergence in the obsolescence
636 # history. It contributes successors sets distinct from other
636 # history. It contributes successors sets distinct from other
637 # markers.
637 # markers.
638 #
638 #
639 # Within a marker, a successor may have divergent successors
639 # Within a marker, a successor may have divergent successors
640 # sets. In such a case, the marker will contribute multiple
640 # sets. In such a case, the marker will contribute multiple
641 # divergent successors sets. If multiple successors have
641 # divergent successors sets. If multiple successors have
642 # divergent successors sets, a cartesian product is used.
642 # divergent successors sets, a cartesian product is used.
643 #
643 #
644 # At the end we post-process successors sets to remove
644 # At the end we post-process successors sets to remove
645 # duplicated entry and successors set that are strict subset of
645 # duplicated entry and successors set that are strict subset of
646 # another one.
646 # another one.
647 succssets = []
647 succssets = []
648 for mark in sorted(succmarkers[current]):
648 for mark in sorted(succmarkers[current]):
649 # successors sets contributed by this marker
649 # successors sets contributed by this marker
650 markss = [[]]
650 markss = [[]]
651 for suc in mark[1]:
651 for suc in mark[1]:
652 # cardinal product with previous successors
652 # cardinal product with previous successors
653 productresult = []
653 productresult = []
654 for prefix in markss:
654 for prefix in markss:
655 for suffix in cache[suc]:
655 for suffix in cache[suc]:
656 newss = list(prefix)
656 newss = list(prefix)
657 for part in suffix:
657 for part in suffix:
658 # do not duplicated entry in successors set
658 # do not duplicated entry in successors set
659 # first entry wins.
659 # first entry wins.
660 if part not in newss:
660 if part not in newss:
661 newss.append(part)
661 newss.append(part)
662 productresult.append(newss)
662 productresult.append(newss)
663 markss = productresult
663 markss = productresult
664 succssets.extend(markss)
664 succssets.extend(markss)
665 # remove duplicated and subset
665 # remove duplicated and subset
666 seen = []
666 seen = []
667 final = []
667 final = []
668 candidate = sorted(((set(s), s) for s in succssets if s),
668 candidate = sorted(((set(s), s) for s in succssets if s),
669 key=lambda x: len(x[1]), reverse=True)
669 key=lambda x: len(x[1]), reverse=True)
670 for setversion, listversion in candidate:
670 for setversion, listversion in candidate:
671 for seenset in seen:
671 for seenset in seen:
672 if setversion.issubset(seenset):
672 if setversion.issubset(seenset):
673 break
673 break
674 else:
674 else:
675 final.append(listversion)
675 final.append(listversion)
676 seen.append(setversion)
676 seen.append(setversion)
677 final.reverse() # put small successors set first
677 final.reverse() # put small successors set first
678 cache[current] = final
678 cache[current] = final
679 return cache[initialnode]
679 return cache[initialnode]
680
680
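The prune / replace / split / divergence outcomes enumerated in the docstring can be illustrated with a toy, repository-free version of the recursive shape sketched in the comment above. This is a minimal sketch: the marker map, node names, and `known` set are made up, the graph is assumed acyclic, and the real code's in-set deduplication and subset pruning are omitted.

```python
def toy_successorssets(markers, node, known=frozenset()):
    """Toy recursive version of the algorithm, assuming an acyclic
    marker graph (the real code flattens the recursion and breaks
    cycles)."""
    if node not in markers:
        # non-obsolete node: its own successors set if known locally,
        # otherwise treated as pruned
        return [(node,)] if node in known else []
    result = []
    for succs in markers[node]:      # one entry per marker: divergence
        sets = [()]                  # cartesian product across successors
        for suc in succs:
            sets = [prefix + suffix
                    for prefix in sets
                    for suffix in toy_successorssets(markers, suc, known)]
        result.extend(sets)
    # prune markers (no successors) contribute only empty sets: drop them
    return [s for s in result if s]

markers = {
    'A': [('A1',)],           # A replaced by A1
    'B': [()],                # B pruned
    'C': [('C1', 'C2')],      # C split into C1 and C2
    'D': [('D1',), ('D2',)],  # D rewritten twice: divergence
}
known = {'A1', 'C1', 'C2', 'D1', 'D2'}
```

Running it on the four toy nodes yields `[('A1',)]`, `[]`, `[('C1', 'C2')]` and `[('D1',), ('D2',)]`, matching the `[(A',)]` / `[]` / `[(A',B')]` / `[(A',), (A'',)]` cases described in the docstring.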
def _knownrevs(repo, nodes):
    """yield the revision numbers of the known nodes passed as parameters

    Unknown revisions are silently ignored."""
    torev = repo.changelog.nodemap.get
    for n in nodes:
        rev = torev(n)
        if rev is not None:
            yield rev

# mapping of 'set-name' -> <function to compute this set>
cachefuncs = {}
def cachefor(name):
    """Decorator to register a function as computing the cache for a set"""
    def decorator(func):
        assert name not in cachefuncs
        cachefuncs[name] = func
        return func
    return decorator

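The registration pattern used by `cachefor` and consumed by `getrevs` below can be shown standalone. This is an illustrative sketch: the `'evens'` set, `_computeevens`, `caches`, and `getset` are made-up stand-ins, not part of this module.

```python
cachefuncs = {}  # 'set-name' -> function computing that set

def cachefor(name):
    """Decorator registering a function as the computer of set `name`."""
    def decorator(func):
        assert name not in cachefuncs
        cachefuncs[name] = func
        return func
    return decorator

@cachefor('evens')
def _computeevens(data):
    return set(x for x in data if x % 2 == 0)

caches = {}  # lazily filled, in the style of obsstore.caches

def getset(name, data):
    """Compute the named set on first access, then serve it from cache."""
    if name not in caches:
        caches[name] = cachefuncs[name](data)
    return caches[name]
```

The first `getset('evens', ...)` call computes and stores the set; later calls return the cached value without recomputing, which is why the real caches must be invalidated when the repository changes.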
def getrevs(repo, name):
    """Return the set of revisions that belong to the <name> set

    Such access may compute the set and cache it for future use"""
    repo = repo.unfiltered()
    if not repo.obsstore:
        return ()
    if name not in repo.obsstore.caches:
        repo.obsstore.caches[name] = cachefuncs[name](repo)
    return repo.obsstore.caches[name]

# To keep things simple we need to invalidate obsolescence caches when:
#
# - a new changeset is added
# - the public phase is changed
# - obsolescence markers are added
# - strip is used on a repo
def clearobscaches(repo):
    """Remove all obsolescence-related caches from a repo

    This removes all caches in the obsstore if the obsstore already exists on
    the repo.

    (We could be smarter here given the exact event that triggered the cache
    clearing)"""
    # only clear caches if there is obsstore data in this repo
    if 'obsstore' in repo._filecache:
        repo.obsstore.caches.clear()

@cachefor('obsolete')
def _computeobsoleteset(repo):
    """the set of obsolete revisions"""
    obs = set()
    getrev = repo.changelog.nodemap.get
    getphase = repo._phasecache.phase
    for node in repo.obsstore.successors:
        rev = getrev(node)
        if rev is not None and getphase(repo, rev):
            obs.add(rev)
    return obs

@cachefor('unstable')
def _computeunstableset(repo):
    """the set of non-obsolete revisions with obsolete parents"""
    # revset is not efficient enough here
    # we do (obsolete()::) - obsolete() by hand
    obs = getrevs(repo, 'obsolete')
    if not obs:
        return set()
    cl = repo.changelog
    return set(r for r in cl.descendants(obs) if r not in obs)

@cachefor('suspended')
def _computesuspendedset(repo):
    """the set of obsolete parents with non-obsolete descendants"""
    suspended = repo.changelog.ancestors(getrevs(repo, 'unstable'))
    return set(r for r in getrevs(repo, 'obsolete') if r in suspended)

@cachefor('extinct')
def _computeextinctset(repo):
    """the set of obsolete parents without non-obsolete descendants"""
    return getrevs(repo, 'obsolete') - getrevs(repo, 'suspended')

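How the obsolete, unstable, suspended, and extinct sets relate can be checked on a tiny hand-built DAG. The `parents` map and the `descendants`/`ancestors` helpers below are illustrative stand-ins for the changelog, not the repo API:

```python
# Hypothetical 5-revision DAG: 0 is the root, 2 and 3 are children of 1,
# and 4 is the child of 3. Parent numbers are always smaller than children.
parents = {0: [], 1: [0], 2: [1], 3: [1], 4: [3]}

def descendants(revs):
    """revisions having an ancestor in `revs` (exclusive of `revs`)"""
    out = set()
    for r in sorted(parents):  # parents come before children in this toy DAG
        if r not in revs and any(p in revs or p in out for p in parents[r]):
            out.add(r)
    return out

def ancestors(revs):
    """transitive parents of `revs` (exclusive of `revs`)"""
    out, stack = set(), list(revs)
    while stack:
        for p in parents[stack.pop()]:
            if p not in out:
                out.add(p)
                stack.append(p)
    return out

obsolete = {2, 3}  # 2 is a dead head; 3 has the live descendant 4
unstable = descendants(obsolete) - obsolete   # (obsolete()::) - obsolete()
suspended = obsolete & ancestors(unstable)
extinct = obsolete - suspended
```

On this DAG, revision 4 is unstable (live child of obsolete 3), 3 is suspended (obsolete but still an ancestor of something live), and 2 is extinct (obsolete with no live descendants), mirroring the three cached sets above.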
@cachefor('bumped')
def _computebumpedset(repo):
    """the set of revs trying to obsolete public revisions"""
    bumped = set()
    # util functions (avoid attribute lookups in the loop)
    phase = repo._phasecache.phase  # would be faster to grab the full list
    public = phases.public
    cl = repo.changelog
    torev = cl.nodemap.get
    obs = getrevs(repo, 'obsolete')
    for rev in repo:
        # We only evaluate mutable, non-obsolete revisions
        if (public < phase(repo, rev)) and (rev not in obs):
            node = cl.node(rev)
            # (future) A cache of precursors may be worth it if split is very
            # common
            for pnode in allprecursors(repo.obsstore, [node],
                                       ignoreflags=bumpedfix):
                prev = torev(pnode)  # unfiltered! but so is phasecache
                if (prev is not None) and (phase(repo, prev) <= public):
                    # we have a public precursor
                    bumped.add(rev)
                    break  # Next draft!
    return bumped

@cachefor('divergent')
def _computedivergentset(repo):
    """the set of revs that compete to be the final successors of some
    revision.
    """
    divergent = set()
    obsstore = repo.obsstore
    newermap = {}
    for ctx in repo.set('(not public()) - obsolete()'):
        mark = obsstore.precursors.get(ctx.node(), ())
        toprocess = set(mark)
        while toprocess:
            prec = toprocess.pop()[0]
            if prec not in newermap:
                successorssets(repo, prec, newermap)
            newer = [n for n in newermap[prec] if n]
            if len(newer) > 1:
                divergent.add(ctx.rev())
                break
            toprocess.update(obsstore.precursors.get(prec, ()))
    return divergent

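The inner loop above, which walks precursors until it finds one with more than one non-empty successors set, can be sketched without a repository. The `precursors` and `succsets` maps below are toy data assumed for illustration:

```python
precursors = {'B1': ['A'], 'B2': ['A']}  # A was rewritten as both B1 and B2
succsets = {'A': [('B1',), ('B2',)]}     # two competing successors sets for A

def isdivergent(node):
    """True if some (transitive) precursor of `node` has more than one
    non-empty successors set."""
    toprocess = set(precursors.get(node, ()))
    while toprocess:
        prec = toprocess.pop()
        newer = [s for s in succsets.get(prec, []) if s]
        if len(newer) > 1:
            return True
        toprocess.update(precursors.get(prec, ()))
    return False
```

Here both `B1` and `B2` are divergent because their shared precursor `A` has two competing successors sets, while a node with no rewritten precursor is not.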
def createmarkers(repo, relations, flag=0, metadata=None):
    """Add obsolete markers between changesets in a repo

    <relations> must be an iterable of (<old>, (<new>, ...)[, {metadata}])
    tuples. `old` and `news` are changectx. The optional per-relation metadata
    is a dictionary containing metadata for this marker only. It is merged
    with the global metadata specified through the `metadata` argument of
    this function.

    Trying to obsolete a public changeset will raise an exception.

    The current user and date are used except if specified otherwise in the
    metadata attribute.

    This function operates within a transaction of its own, but does
    not take any lock on the repo.
    """
    # prepare metadata
    if metadata is None:
        metadata = {}
    if 'date' not in metadata:
        metadata['date'] = '%i %i' % util.makedate()
    if 'user' not in metadata:
        metadata['user'] = repo.ui.username()
    tr = repo.transaction('add-obsolescence-marker')
    try:
        for rel in relations:
            prec = rel[0]
            sucs = rel[1]
            localmetadata = metadata.copy()
            if 2 < len(rel):
                localmetadata.update(rel[2])

            if not prec.mutable():
                raise util.Abort("cannot obsolete immutable changeset: %s"
                                 % prec)
            nprec = prec.node()
            nsucs = tuple(s.node() for s in sucs)
            if nprec in nsucs:
                raise util.Abort("changeset %s cannot obsolete itself" % prec)
            repo.obsstore.create(tr, nprec, nsucs, flag, localmetadata)
            repo.filteredrevcache.clear()
        tr.close()
    finally:
        tr.release()
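The relation handling above — copying the global metadata and overlaying the optional third tuple element — can be isolated as a small sketch. The helper name `merge_marker_metadata` and the toy node strings are hypothetical, used only to show the merge rule:

```python
def merge_marker_metadata(relations, metadata=None):
    """yield (prec, sucs, metadata) triples, applying per-relation
    metadata on top of a copy of the global metadata"""
    if metadata is None:
        metadata = {}
    for rel in relations:
        prec, sucs = rel[0], rel[1]
        localmetadata = metadata.copy()
        if 2 < len(rel):
            localmetadata.update(rel[2])
        yield prec, sucs, localmetadata

relations = [
    ('old1', ('new1',)),                              # global metadata only
    ('old2', ('new2a', 'new2b'), {'note': 'split'}),  # extra per-marker data
]
out = list(merge_marker_metadata(relations, {'user': 'alice'}))
```

Per-relation keys win over (a copy of) the global dictionary, so each marker can carry its own annotations without the relations contaminating one another.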