obsolete: clarify documentation for successorssets
Sean Farley -
r20277:c05b968d default
# obsolete.py - obsolete markers handling
#
# Copyright 2012 Pierre-Yves David <pierre-yves.david@ens-lyon.org>
#                Logilab SA        <contact@logilab.fr>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""Obsolete markers handling

An obsolete marker maps an old changeset to a list of new
changesets. If the list of new changesets is empty, the old changeset
is said to be "killed". Otherwise, the old changeset is being
"replaced" by the new changesets.

Obsolete markers can be used to record and distribute changeset graph
transformations performed by history rewriting operations, and help
building new tools to reconcile conflicting rewriting actions. To
facilitate conflict resolution, markers include various annotations
besides the old and new changeset identifiers, such as creation date or
author name.

The old obsoleted changeset is called a "precursor" and possible
replacements are called "successors". Markers that use changeset X as a
precursor are called "successor markers of X" because they hold
information about the successors of X. Markers that use changeset Y as a
successor are called "precursor markers of Y" because they hold
information about the precursors of Y.

Examples:

- When changeset A is replaced by a changeset A', one marker is stored:

    (A, (A',))

- When changesets A and B are folded into a new changeset C, two markers are
  stored:

    (A, (C,)) and (B, (C,))

- When changeset A is simply "pruned" from the graph, a marker is created:

    (A, ())

- When changeset A is split into B and C, a single marker is used:

    (A, (B, C))

  We use a single marker to distinguish the "split" case from the
  "divergence" case. If two independent operations rewrite the same
  changeset A into A' and A'', we have an error case: divergent rewriting.
  We can detect it because two markers will be created independently:

    (A, (B,)) and (A, (C,))

Format
------

Markers are stored in an append-only file stored in
'.hg/store/obsstore'.

The file starts with a version header:

- 1 unsigned byte: version number, starting at zero.


The header is followed by the markers. Each marker is made of:

- 1 unsigned byte: number of new changesets "N", could be zero.

- 1 unsigned 32-bit integer: metadata size "M" in bytes.

- 1 byte: a bit field. It is reserved for flags used in common
  obsolete marker operations, to avoid repeated decoding of metadata
  entries.

- 20 bytes: obsoleted changeset identifier.

- N*20 bytes: new changeset identifiers.

- M bytes: metadata as a sequence of nul-terminated strings. Each
  string contains a key and a value, separated by a colon ':', without
  additional encoding. Keys cannot contain '\0' or ':' and values
  cannot contain '\0'.
"""
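
# Illustration (editor's sketch, not part of the original module): the fixed
# part of a marker described above maps directly onto the '>BIB20s' struct
# format used below (`_fmfixed`): successor count "N", metadata size "M",
# the flag byte, and the 20-byte precursor node. For example:
#
#   import struct
#   prec = '\x11' * 20                              # fake 20-byte node
#   raw = struct.pack('>BIB20s', 1, 0, 0, prec)     # N=1, M=0, no flags
#   nbsuc, mdsize, flags, pre = struct.unpack('>BIB20s', raw)
#   assert (nbsuc, mdsize, flags, pre) == (1, 0, 0, prec)
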
import struct
import util, base85, node
import phases
from i18n import _

_pack = struct.pack
_unpack = struct.unpack

_SEEK_END = 2 # os.SEEK_END was introduced in Python 2.5

# the obsolete feature is not mature enough to be enabled by default.
# you have to rely on a third party extension to enable this.
_enabled = False

# data used for parsing and writing
_fmversion = 0
_fmfixed = '>BIB20s'
_fmnode = '20s'
_fmfsize = struct.calcsize(_fmfixed)
_fnodesize = struct.calcsize(_fmnode)

### obsolescence marker flag

## bumpedfix flag
#
# When a changeset A' succeeds a changeset A which became public, we call A'
# "bumped" because it's a successor of a public changeset
#
#  o  A' (bumped)
#  |`:
#  | o  A
#  |/
#  o  Z
#
# The way to solve this situation is to create a new changeset Ad as a child
# of A. This changeset has the same content as A'. So the diff from A to A'
# is the same as the diff from A to Ad. Ad is marked as a successor of A'
#
#  o  Ad
#  |`:
#  | x  A'
#  |'|
#  o |  A
#  |/
#  o  Z
#
# But by transitivity Ad is also a successor of A. To avoid having Ad marked
# as bumped too, we add the `bumpedfix` flag to the marker, <A', (Ad,)>.
# This flag means that the successor expresses the changes between the public
# and bumped versions and fixes the situation, breaking the transitivity of
# "bumped" here.
bumpedfix = 1

def _readmarkers(data):
    """Read and enumerate markers from raw data"""
    off = 0
    diskversion = _unpack('>B', data[off:off + 1])[0]
    off += 1
    if diskversion != _fmversion:
        raise util.Abort(_('parsing obsolete marker: unknown version %r')
                         % diskversion)

    # Loop on markers
    l = len(data)
    while off + _fmfsize <= l:
        # read fixed part
        cur = data[off:off + _fmfsize]
        off += _fmfsize
        nbsuc, mdsize, flags, pre = _unpack(_fmfixed, cur)
        # read replacement
        sucs = ()
        if nbsuc:
            s = (_fnodesize * nbsuc)
            cur = data[off:off + s]
            sucs = _unpack(_fmnode * nbsuc, cur)
            off += s
        # read metadata
        # (metadata will be decoded on demand)
        metadata = data[off:off + mdsize]
        if len(metadata) != mdsize:
            raise util.Abort(_('parsing obsolete marker: metadata is too '
                               'short, %d bytes expected, got %d')
                             % (mdsize, len(metadata)))
        off += mdsize
        yield (pre, sucs, flags, metadata)

def encodemeta(meta):
    """Return an encoded version of a string-to-string metadata mapping.

    Assume no ':' in keys and no '\0' in both keys and values."""
    for key, value in meta.iteritems():
        if ':' in key or '\0' in key:
            raise ValueError("':' and '\0' are forbidden in metadata keys")
        if '\0' in value:
            raise ValueError("'\0' is forbidden in metadata values")
    return '\0'.join(['%s:%s' % (k, meta[k]) for k in sorted(meta)])

def decodemeta(data):
    """Return a string-to-string dictionary from an encoded version."""
    d = {}
    for l in data.split('\0'):
        if l:
            # values may contain ':', so split on the first ':' only
            key, value = l.split(':', 1)
            d[key] = value
    return d

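# Illustration (editor's sketch, not part of the original module): for
# well-formed mappings, decodemeta() inverts encodemeta(). Keys are sorted on
# encoding, so the output is deterministic:
#
#   meta = {'user': 'alice', 'date': '0 0'}
#   assert encodemeta(meta) == 'date:0 0\x00user:alice'
#   assert decodemeta(encodemeta(meta)) == meta
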
class marker(object):
    """Wrap obsolete marker raw data"""

    def __init__(self, repo, data):
        # the repo argument will be used to create changectx in later versions
        self._repo = repo
        self._data = data
        self._decodedmeta = None

    def __hash__(self):
        return hash(self._data)

    def __eq__(self, other):
        if type(other) != type(self):
            return False
        return self._data == other._data

    def precnode(self):
        """Precursor changeset node identifier"""
        return self._data[0]

    def succnodes(self):
        """List of successor changeset node identifiers"""
        return self._data[1]

    def metadata(self):
        """Decoded metadata dictionary"""
        if self._decodedmeta is None:
            self._decodedmeta = decodemeta(self._data[3])
        return self._decodedmeta

    def date(self):
        """Creation date as (unixtime, offset)"""
        parts = self.metadata()['date'].split(' ')
        return (float(parts[0]), int(parts[1]))

class obsstore(object):
    """Store obsolete markers

    Markers can be accessed with two mappings:
    - precursors[x] -> set(markers on precursors edges of x)
    - successors[x] -> set(markers on successors edges of x)
    """

    def __init__(self, sopener):
        # caches for various obsolescence related sets
        self.caches = {}
        self._all = []
        # new markers to serialize
        self.precursors = {}
        self.successors = {}
        self.sopener = sopener
        data = sopener.tryread('obsstore')
        if data:
            self._load(_readmarkers(data))

    def __iter__(self):
        return iter(self._all)

    def __nonzero__(self):
        return bool(self._all)

    def create(self, transaction, prec, succs=(), flag=0, metadata=None):
        """obsolete: add a new obsolete marker

        * ensuring it is hashable
        * check mandatory metadata
        * encode metadata
        """
        if metadata is None:
            metadata = {}
        if 'date' not in metadata:
            metadata['date'] = "%d %d" % util.makedate()
        if len(prec) != 20:
            raise ValueError(prec)
        for succ in succs:
            if len(succ) != 20:
                raise ValueError(succ)
        marker = (str(prec), tuple(succs), int(flag), encodemeta(metadata))
        self.add(transaction, [marker])

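    # Illustration (editor's sketch, not part of the original module): a
    # typical call runs inside a transaction and passes 20-byte binary nodes,
    # much as createmarkers() does at the end of this module. `tr`, `oldnode`
    # and `newnode` are hypothetical names:
    #
    #   store.create(tr, oldnode, (newnode,), metadata={'user': 'alice'})
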
    def add(self, transaction, markers):
        """Add new markers to the store

        Take care of filtering duplicates.
        Return the number of new markers."""
        if not _enabled:
            raise util.Abort('obsolete feature is not enabled on this repo')
        known = set(self._all)
        new = []
        for m in markers:
            if m not in known:
                known.add(m)
                new.append(m)
        if new:
            f = self.sopener('obsstore', 'ab')
            try:
                # Whether the file's current position is at the beginning or
                # at the end after opening a file for appending is
                # implementation defined. So we must seek to the end before
                # calling tell(), or we may get a zero offset for non-zero
                # sized files on some platforms (issue3543).
                f.seek(0, _SEEK_END)
                offset = f.tell()
                transaction.add('obsstore', offset)
                # offset == 0: new file - add the version header
                for bytes in _encodemarkers(new, offset == 0):
                    f.write(bytes)
            finally:
                # XXX: f.close() == filecache invalidation == obsstore rebuilt.
                # call 'filecacheentry.refresh()' here
                f.close()
            self._load(new)
            # new markers *may* have changed several sets. invalidate the cache.
            self.caches.clear()
        return len(new)

    def mergemarkers(self, transaction, data):
        markers = _readmarkers(data)
        self.add(transaction, markers)

    def _load(self, markers):
        for mark in markers:
            self._all.append(mark)
            pre, sucs = mark[:2]
            self.successors.setdefault(pre, set()).add(mark)
            for suc in sucs:
                self.precursors.setdefault(suc, set()).add(mark)
        if node.nullid in self.precursors:
            raise util.Abort(_('bad obsolescence marker detected: '
                               'invalid successors nullid'))

def _encodemarkers(markers, addheader=False):
    # Kept separate from flushmarkers(), it will be reused for
    # markers exchange.
    if addheader:
        yield _pack('>B', _fmversion)
    for marker in markers:
        yield _encodeonemarker(marker)


def _encodeonemarker(marker):
    pre, sucs, flags, metadata = marker
    nbsuc = len(sucs)
    format = _fmfixed + (_fmnode * nbsuc)
    data = [nbsuc, len(metadata), flags, pre]
    data.extend(sucs)
    return _pack(format, *data) + metadata

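# Illustration (editor's sketch, not part of the original module): encoding a
# marker list with a header and feeding the bytes back to _readmarkers()
# round-trips. The marker tuple below is hypothetical:
#
#   m = ('\x11' * 20, ('\x22' * 20,), 0, 'user:alice')
#   data = ''.join(_encodemarkers([m], True))
#   assert list(_readmarkers(data)) == [m]
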
# arbitrarily picked to fit into 8K limit from HTTP server
# you have to take into account:
# - the version header
# - the base85 encoding
_maxpayload = 5300

def listmarkers(repo):
    """List markers over pushkey"""
    if not repo.obsstore:
        return {}
    keys = {}
    parts = []
    currentlen = _maxpayload * 2 # ensure we create a new part
    for marker in repo.obsstore:
        nextdata = _encodeonemarker(marker)
        if (len(nextdata) + currentlen > _maxpayload):
            currentpart = []
            currentlen = 0
            parts.append(currentpart)
        currentpart.append(nextdata)
        currentlen += len(nextdata)
    for idx, part in enumerate(reversed(parts)):
        data = ''.join([_pack('>B', _fmversion)] + part)
        keys['dump%i' % idx] = base85.b85encode(data)
    return keys

def pushmarker(repo, key, old, new):
    """Push markers over pushkey"""
    if not key.startswith('dump'):
        repo.ui.warn(_('unknown key: %r') % key)
        return 0
    if old:
        repo.ui.warn(_('unexpected old value for %r') % key)
        return 0
    data = base85.b85decode(new)
    lock = repo.lock()
    try:
        tr = repo.transaction('pushkey: obsolete markers')
        try:
            repo.obsstore.mergemarkers(tr, data)
            tr.close()
            return 1
        finally:
            tr.release()
    finally:
        lock.release()

def syncpush(repo, remote):
    """utility function to push obsolete markers to a remote

    Exists mostly to allow overriding for experimentation purposes"""
    if (_enabled and repo.obsstore and
        'obsolete' in remote.listkeys('namespaces')):
        rslts = []
        remotedata = repo.listkeys('obsolete')
        for key in sorted(remotedata, reverse=True):
            # reverse sort to ensure we end with dump0
            data = remotedata[key]
            rslts.append(remote.pushkey('obsolete', key, '', data))
        if [r for r in rslts if not r]:
            msg = _('failed to push some obsolete markers!\n')
            repo.ui.warn(msg)

def syncpull(repo, remote, gettransaction):
    """utility function to pull obsolete markers from a remote

    `gettransaction` is a function that returns the pull transaction, creating
    one if necessary. We return the transaction to inform the calling code
    that a new transaction has been created (when applicable).

    Exists mostly to allow overriding for experimentation purposes"""
    tr = None
    if _enabled:
        repo.ui.debug('fetching remote obsolete markers\n')
        remoteobs = remote.listkeys('obsolete')
        if 'dump0' in remoteobs:
            tr = gettransaction()
            for key in sorted(remoteobs, reverse=True):
                if key.startswith('dump'):
                    data = base85.b85decode(remoteobs[key])
                    repo.obsstore.mergemarkers(tr, data)
            repo.invalidatevolatilesets()
    return tr

def allmarkers(repo):
    """all obsolete markers known in a repository"""
    for markerdata in repo.obsstore:
        yield marker(repo, markerdata)

def precursormarkers(ctx):
    """obsolete markers marking this changeset as a successor"""
    for data in ctx._repo.obsstore.precursors.get(ctx.node(), ()):
        yield marker(ctx._repo, data)

def successormarkers(ctx):
    """obsolete markers making this changeset obsolete"""
    for data in ctx._repo.obsstore.successors.get(ctx.node(), ()):
        yield marker(ctx._repo, data)

def allsuccessors(obsstore, nodes, ignoreflags=0):
    """Yield a node for every successor of <nodes>.

    Some successors may be unknown locally.

    This is a linear yield unsuited to detecting split changesets. It includes
    initial nodes too."""
    remaining = set(nodes)
    seen = set(remaining)
    while remaining:
        current = remaining.pop()
        yield current
        for mark in obsstore.successors.get(current, ()):
            # ignore markers flagged with the specified flag
            if mark[2] & ignoreflags:
                continue
            for suc in mark[1]:
                if suc not in seen:
                    seen.add(suc)
                    remaining.add(suc)

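# Illustration (editor's sketch, not part of the original module): the store
# only needs a `successors` mapping of node -> markers for this walk, so a
# stub is enough to see the transitive behaviour. All names are hypothetical:
#
#   class _stubstore(object):
#       successors = {'A': [('A', ('B',), 0, '')],
#                     'B': [('B', ('C',), 0, '')]}
#
#   sorted(allsuccessors(_stubstore(), ['A']))  # -> ['A', 'B', 'C']
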
def allprecursors(obsstore, nodes, ignoreflags=0):
    """Yield a node for every precursor of <nodes>.

    Some precursors may be unknown locally.

    This is a linear yield unsuited to detecting folded changesets. It includes
    initial nodes too."""

    remaining = set(nodes)
    seen = set(remaining)
    while remaining:
        current = remaining.pop()
        yield current
        for mark in obsstore.precursors.get(current, ()):
            # ignore markers flagged with the specified flag
            if mark[2] & ignoreflags:
                continue
            prec = mark[0]
            if prec not in seen:
                seen.add(prec)
                remaining.add(prec)

def foreground(repo, nodes):
    """return all nodes in the "foreground" of other nodes

    The foreground of a revision is anything reachable using parent -> children
    or precursor -> successor relation. It is very similar to "descendant" but
    augmented with obsolescence information.

    Beware that obsolescence cycles may result in complex situations.
    """
    repo = repo.unfiltered()
    foreground = set(repo.set('%ln::', nodes))
    if repo.obsstore:
        # We only need this complicated logic if there is obsolescence
        # XXX will probably deserve an optimised revset.
        nm = repo.changelog.nodemap
        plen = -1
        # compute the whole set of successors or descendants
        while len(foreground) != plen:
            plen = len(foreground)
            succs = set(c.node() for c in foreground)
            mutable = [c.node() for c in foreground if c.mutable()]
            succs.update(allsuccessors(repo.obsstore, mutable))
            known = (n for n in succs if n in nm)
            foreground = set(repo.set('%ln::', known))
    return set(c.node() for c in foreground)

def successorssets(repo, initialnode, cache=None):
    """Return all successors sets of the initial node.

    The successors set of a changeset A is a group of revisions that succeed
    A. It succeeds A as a consistent whole, each revision being only a partial
    replacement. The successors set contains non-obsolete changesets only.

    This function returns the full list of successor sets which is why it
    returns a list of tuples and not just a single tuple. Each tuple is a
    valid successors set. Note that (A,) may be a valid successors set for
    changeset A (see below).

    In most cases, a changeset A will have a single element (e.g. the
    changeset A is replaced by A') in its successors set. Though, it is also
    common for a changeset A to have no elements in its successors set (e.g.
    the changeset has been pruned). Therefore, the returned list of successors
    sets will be [(A',)] or [], respectively.

    When a changeset A is split into A' and B', however, it will result in a
    successors set containing more than a single element, i.e. [(A',B')].
    Divergent changesets will result in multiple successors sets, i.e.
    [(A',), (A'')].

    If a changeset A is not obsolete, then it will conceptually have no
    successors set. To distinguish this from a pruned changeset, the successor
    set will only contain itself, i.e. [(A,)].

    Finally, successors unknown locally are considered to be pruned (obsoleted
    without any successors).

    The optional `cache` parameter is a dictionary that may contain
    precomputed successors sets. It is meant to reuse the computation of a
    previous call to `successorssets` when multiple calls are made at the
    same time. The cache dictionary is updated in place. The caller is
    responsible for its life cycle. Code that makes multiple calls to
    `successorssets` *must* use this cache mechanism or suffer terrible
    performance.
    """

    succmarkers = repo.obsstore.successors

    # Stack of nodes we search successors sets for
    toproceed = [initialnode]
    # set version of above list for fast loop detection
    # element added to "toproceed" must be added here
    stackedset = set(toproceed)
    if cache is None:
        cache = {}

    # This while loop is the flattened version of a recursive search for
    # successors sets
    #
    # def successorssets(x):
    #     successors = directsuccessors(x)
    #     ss = [[]]
    #     for succ in directsuccessors(x):
    #         # product as in itertools cartesian product
    #         ss = product(ss, successorssets(succ))
    #     return ss
    #
    # But we can not use plain recursive calls here:
    # - that would blow the python call stack
    # - obsolescence markers may have cycles, we need to handle them.
    #
    # The `toproceed` list acts as our call stack. Every node we search
    # successors sets for is stacked there.
    #
    # The `stackedset` is a set version of this stack used to check if a node
    # is already stacked. This check is used to detect cycles and prevent
    # infinite loops.
    #
    # The successors sets of all nodes are stored in the `cache` dictionary.
    #
    # After this while loop ends we use the cache to return the successors
    # sets for the node requested by the caller.
    while toproceed:
        # Every iteration tries to compute the successors sets of the topmost
        # node of the stack: CURRENT.
        #
        # There are four possible outcomes:
        #
        # 1) We already know the successors sets of CURRENT:
        #    -> mission accomplished, pop it from the stack.
        # 2) Node is not obsolete:
        #    -> the node is its own successors sets. Add it to the cache.
        # 3) We do not know successors set of direct successors of CURRENT:
        #    -> We add those successors to the stack.
        # 4) We know successors sets of all direct successors of CURRENT:
        #    -> We can compute CURRENT successors set and add it to the
        #       cache.
        #
        current = toproceed[-1]
        if current in cache:
            # case (1): We already know the successors sets
            stackedset.remove(toproceed.pop())
        elif current not in succmarkers:
            # case (2): The node is not obsolete.
            if current in repo:
                # We have a valid last successor.
                cache[current] = [(current,)]
            else:
                # Final obsolete version is unknown locally.
                # Do not count that as a valid successor.
                cache[current] = []
        else:
            # cases (3) and (4)
            #
            # We proceed in two phases. Phase 1 aims to distinguish case (3)
            # from case (4):
            #
            # For each direct successor of CURRENT, we check whether its
            # successors sets are known. If they are not, we stack the
            # unknown node and proceed to the next iteration of the while
            # loop. (case 3)
            #
            # During this step, we may detect obsolescence cycles: a node
            # with unknown successors sets but already in the call stack.
            # In such a situation, we arbitrarily set the successors sets of
            # the node to nothing (node pruned) to break the cycle.
            #
            # If no break was encountered we proceed to phase 2.
            #
            # Phase 2 computes successors sets of CURRENT (case 4); see
            # details in phase 2 itself.
            #
            # Note the two levels of iteration in each phase.
            # - The first one handles obsolescence markers using CURRENT as
            #   precursor (successors markers of CURRENT).
            #
            #   Having multiple entries here means divergence.
            #
            # - The second one handles successors defined in each marker.
            #
            #   Having none means pruned node, multiple successors means
            #   split, a single successor is a standard replacement.
            #
            for mark in sorted(succmarkers[current]):
                for suc in mark[1]:
                    if suc not in cache:
                        if suc in stackedset:
                            # cycle breaking
                            cache[suc] = []
                        else:
                            # case (3) If we have not computed successors sets
                            # of one of those successors we add it to the
                            # `toproceed` stack and stop all work for this
                            # iteration.
                            toproceed.append(suc)
                            stackedset.add(suc)
                            break
                else:
                    continue
                break
            else:
                # case (4): we know all successors sets of all direct
                # successors
                #
                # Successors set contributed by each marker depends on the
                # successors sets of all its "successors" nodes.
                #
                # Each different marker is a divergence in the obsolescence
                # history. It contributes successors sets distinct from other
                # markers.
                #
                # Within a marker, a successor may have divergent successors
                # sets. In such a case, the marker will contribute multiple
                # divergent successors sets. If multiple successors have
                # divergent successors sets, a cartesian product is used.
                #
                # At the end we post-process successors sets to remove
                # duplicated entries and successors sets that are strict
                # subsets of another one.
                succssets = []
                for mark in sorted(succmarkers[current]):
                    # successors sets contributed by this marker
                    markss = [[]]
                    for suc in mark[1]:
                        # cartesian product with previous successors
                        productresult = []
                        for prefix in markss:
                            for suffix in cache[suc]:
                                newss = list(prefix)
                                for part in suffix:
                                    # do not duplicate entries in a successors
                                    # set. first entry wins.
                                    if part not in newss:
                                        newss.append(part)
                                productresult.append(newss)
                        markss = productresult
                    succssets.extend(markss)
                # remove duplicates and subsets
                seen = []
                final = []
                candidate = sorted(((set(s), s) for s in succssets if s),
                                   key=lambda x: len(x[1]), reverse=True)
                for setversion, listversion in candidate:
                    for seenset in seen:
                        if setversion.issubset(seenset):
                            break
                    else:
                        final.append(listversion)
                        seen.append(setversion)
                final.reverse() # put small successors set first
                cache[current] = final
    return cache[initialnode]
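
# Illustration (editor's sketch, not part of the original module): only
# `repo.obsstore.successors` and `node in repo` membership are needed here,
# so a stub is enough to exercise successorssets(). All names below are
# hypothetical:
#
#   class _stubobsstore(object):
#       def __init__(self, successors):
#           self.successors = successors
#
#   class _stubrepo(object):
#       def __init__(self, nodes, successors):
#           self._nodes = set(nodes)
#           self.obsstore = _stubobsstore(successors)
#       def __contains__(self, node):
#           return node in self._nodes
#
#   # A was split into B and C; one marker (A, (B, C)) is recorded:
#   repo = _stubrepo(['B', 'C'], {'A': [('A', ('B', 'C'), 0, '')]})
#   assert successorssets(repo, 'A') == [['B', 'C']]
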
def _knownrevs(repo, nodes):
    """yield revision numbers of known nodes passed in parameters

    Unknown revisions are silently ignored."""
    torev = repo.changelog.nodemap.get
    for n in nodes:
        rev = torev(n)
        if rev is not None:
            yield rev

# mapping of 'set-name' -> <function to compute this set>
cachefuncs = {}
def cachefor(name):
    """Decorator to register a function as computing the cache for a set"""
    def decorator(func):
        assert name not in cachefuncs
        cachefuncs[name] = func
        return func
    return decorator

def getrevs(repo, name):
    """Return the set of revisions that belong to the <name> set

    Such access may compute the set and cache it for future use"""
    repo = repo.unfiltered()
    if not repo.obsstore:
        return ()
    if name not in repo.obsstore.caches:
        repo.obsstore.caches[name] = cachefuncs[name](repo)
    return repo.obsstore.caches[name]
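
# Illustration (editor's note, not part of the original module): each set is
# computed at most once per obsstore lifetime. For example, the first
# getrevs(repo, 'obsolete') call runs _computeobsoleteset() below and caches
# the result; later calls return the cached set until clearobscaches() or
# obsstore.add() invalidates it.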

# To be simple we need to invalidate obsolescence caches when:
#
# - a new changeset is added
# - the public phase is changed
# - obsolescence markers are added
# - strip is used on a repo
def clearobscaches(repo):
    """Remove all obsolescence related caches from a repo

    This removes all caches in the obsstore if the obsstore already exists on
    the repo.

    (We could be smarter here given the exact event that triggered the cache
    clearing)"""
    # only clear caches if there is obsstore data in this repo
    if 'obsstore' in repo._filecache:
        repo.obsstore.caches.clear()

@cachefor('obsolete')
def _computeobsoleteset(repo):
    """the set of obsolete revisions"""
    obs = set()
    getrev = repo.changelog.nodemap.get
    getphase = repo._phasecache.phase
    for node in repo.obsstore.successors:
        rev = getrev(node)
        if rev is not None and getphase(repo, rev):
            obs.add(rev)
    return obs

@cachefor('unstable')
def _computeunstableset(repo):
    """the set of non obsolete revisions with obsolete parents"""
    # revset is not efficient enough here
    # we do (obsolete()::) - obsolete() by hand
    obs = getrevs(repo, 'obsolete')
    if not obs:
        return set()
    cl = repo.changelog
    return set(r for r in cl.descendants(obs) if r not in obs)

@cachefor('suspended')
def _computesuspendedset(repo):
    """the set of obsolete parents with non obsolete descendants"""
    suspended = repo.changelog.ancestors(getrevs(repo, 'unstable'))
    return set(r for r in getrevs(repo, 'obsolete') if r in suspended)

@cachefor('extinct')
def _computeextinctset(repo):
    """the set of obsolete parents without non obsolete descendants"""
    return getrevs(repo, 'obsolete') - getrevs(repo, 'suspended')


@cachefor('bumped')
def _computebumpedset(repo):
    """the set of revs trying to obsolete public revisions"""
    bumped = set()
    # utility functions (avoid attribute lookup in the loop)
    phase = repo._phasecache.phase # would be faster to grab the full list
    public = phases.public
    cl = repo.changelog
    torev = cl.nodemap.get
    obs = getrevs(repo, 'obsolete')
    for rev in repo:
        # We only evaluate mutable, non-obsolete revisions
        if (public < phase(repo, rev)) and (rev not in obs):
            node = cl.node(rev)
            # (future) A cache of precursors may be worth it if split is very
            # common
            for pnode in allprecursors(repo.obsstore, [node],
                                       ignoreflags=bumpedfix):
                prev = torev(pnode) # unfiltered! but so is phasecache
                if (prev is not None) and (phase(repo, prev) <= public):
                    # we have a public precursor
                    bumped.add(rev)
                    break # Next draft!
    return bumped

@cachefor('divergent')
def _computedivergentset(repo):
    """the set of revs that compete to be the final successors of some
    revision."""
    divergent = set()
    obsstore = repo.obsstore
    newermap = {}
    for ctx in repo.set('(not public()) - obsolete()'):
        mark = obsstore.precursors.get(ctx.node(), ())
        toprocess = set(mark)
        while toprocess:
            prec = toprocess.pop()[0]
            if prec not in newermap:
                successorssets(repo, prec, newermap)
            newer = [n for n in newermap[prec] if n]
            if len(newer) > 1:
                divergent.add(ctx.rev())
                break
            toprocess.update(obsstore.precursors.get(prec, ()))
    return divergent


def createmarkers(repo, relations, flag=0, metadata=None):
    """Add obsolete markers between changesets in a repo

    <relations> must be an iterable of (<old>, (<new>, ...)) tuples.
    `old` and `news` are changectx objects.

    Trying to obsolete a public changeset will raise an exception.

    The current user and date are used except if specified otherwise in the
    metadata attribute.

    This function operates within a transaction of its own, but does
    not take any lock on the repo.
    """
    # prepare metadata
    if metadata is None:
        metadata = {}
    if 'date' not in metadata:
        metadata['date'] = '%i %i' % util.makedate()
    if 'user' not in metadata:
        metadata['user'] = repo.ui.username()
    tr = repo.transaction('add-obsolescence-marker')
    try:
        for prec, sucs in relations:
            if not prec.mutable():
                raise util.Abort("cannot obsolete immutable changeset: %s"
                                 % prec)
            nprec = prec.node()
            nsucs = tuple(s.node() for s in sucs)
            if nprec in nsucs:
                raise util.Abort("changeset %s cannot obsolete itself" % prec)
            repo.obsstore.create(tr, nprec, nsucs, flag, metadata)
            repo.filteredrevcache.clear()
        tr.close()
    finally:
        tr.release()
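
# Illustration (editor's sketch, not part of the original module): a history
# rewriting command that replaced `old` by `new` (both changectx objects,
# hypothetical names) would record that relation as:
#
#   createmarkers(repo, [(old, (new,))])
#
# and a prune of `old` as:
#
#   createmarkers(repo, [(old, ())])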