obsolete: add a date argument to the `createmarkers` function...
Pierre-Yves David
r22219:79c68626 default
@@ -1,879 +1,877
# obsolete.py - obsolete markers handling
#
# Copyright 2012 Pierre-Yves David <pierre-yves.david@ens-lyon.org>
#                Logilab SA        <contact@logilab.fr>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""Obsolete marker handling

An obsolete marker maps an old changeset to a list of new
changesets. If the list of new changesets is empty, the old changeset
is said to be "killed". Otherwise, the old changeset is being
"replaced" by the new changesets.

Obsolete markers can be used to record and distribute changeset graph
transformations performed by history rewrite operations, and help
building new tools to reconcile conflicting rewrite actions. To
facilitate conflict resolution, markers include various annotations
besides old and new changeset identifiers, such as creation date or
author name.

The old obsoleted changeset is called a "precursor" and possible
replacements are called "successors". Markers that used changeset X as
a precursor are called "successor markers of X" because they hold
information about the successors of X. Markers that use changeset Y as
a successor are called "precursor markers of Y" because they hold
information about the precursors of Y.

Examples:

- When changeset A is replaced by changeset A', one marker is stored:

    (A, (A',))

- When changesets A and B are folded into a new changeset C, two markers are
  stored:

    (A, (C,)) and (B, (C,))

- When changeset A is simply "pruned" from the graph, a marker is created:

    (A, ())

- When changeset A is split into B and C, a single marker is used:

    (A, (B, C))

We use a single marker to distinguish the "split" case from the "divergence"
case. If two independent operations rewrite the same changeset A into A' and
A'', we have an error case: divergent rewriting. We can detect it because
two markers will be created independently:

    (A, (A',)) and (A, (A'',))

Format
------

Markers are stored in an append-only file stored in
'.hg/store/obsstore'.

The file starts with a version header:

- 1 unsigned byte: version number, starting at zero.


The header is followed by the markers. Each marker is made of:

- 1 unsigned byte: number of new changesets "N", can be zero.

- 1 unsigned 32-bit integer: metadata size "M" in bytes.

- 1 byte: a bit field. It is reserved for flags used in common
  obsolete marker operations, to avoid repeated decoding of metadata
  entries.

- 20 bytes: obsoleted changeset identifier.

- N*20 bytes: new changesets identifiers.

- M bytes: metadata as a sequence of nul-terminated strings. Each
  string contains a key and a value, separated by a colon ':', without
  additional encoding. Keys cannot contain '\0' or ':' and values
  cannot contain '\0'.

"""
import struct
import util, base85, node
import phases
from i18n import _

_pack = struct.pack
_unpack = struct.unpack

_SEEK_END = 2 # os.SEEK_END was introduced in Python 2.5

# the obsolete feature is not mature enough to be enabled by default.
# you have to rely on a third party extension to enable this.
_enabled = False

# data used for parsing and writing
_fmversion = 0
_fmfixed = '>BIB20s'
_fmnode = '20s'
_fmfsize = struct.calcsize(_fmfixed)
_fnodesize = struct.calcsize(_fmnode)

### obsolescence marker flag

## bumpedfix flag
#
# When a changeset A' succeeds a changeset A which became public, we call A'
# "bumped" because it is a successor of a public changeset.
#
#     o A' (bumped)
#     |`:
#     | o A
#     |/
#     o Z
#
# The way to solve this situation is to create a new changeset Ad as a child
# of A. This changeset has the same content as A'. So the diff from A to A'
# is the same as the diff from A to Ad. Ad is marked as a successor of A'.
#
#     o Ad
#     |`:
#     | x A'
#     |'|
#     o | A
#     |/
#     o Z
#
# But by transitivity Ad is also a successor of A. To avoid having Ad marked
# as bumped too, we add the `bumpedfix` flag to the marker, <A', (Ad,)>.
# This flag means that the successors express the changes between the public
# and bumped version and fix the situation, breaking the transitivity of
# "bumped" here.
bumpedfix = 1

def _readmarkers(data):
    """Read and enumerate markers from raw data"""
    off = 0
    diskversion = _unpack('>B', data[off:off + 1])[0]
    off += 1
    if diskversion != _fmversion:
        raise util.Abort(_('parsing obsolete marker: unknown version %r')
                         % diskversion)

    # Loop on markers
    l = len(data)
    while off + _fmfsize <= l:
        # read fixed part
        cur = data[off:off + _fmfsize]
        off += _fmfsize
        nbsuc, mdsize, flags, pre = _unpack(_fmfixed, cur)
        # read replacement
        sucs = ()
        if nbsuc:
            s = (_fnodesize * nbsuc)
            cur = data[off:off + s]
            sucs = _unpack(_fmnode * nbsuc, cur)
            off += s
        # read metadata
        # (metadata will be decoded on demand)
        metadata = data[off:off + mdsize]
        if len(metadata) != mdsize:
            raise util.Abort(_('parsing obsolete marker: metadata is too '
                               'short, %d bytes expected, got %d')
                             % (mdsize, len(metadata)))
        off += mdsize
        yield (pre, sucs, flags, metadata)

def encodemeta(meta):
    """Return encoded metadata string to string mapping.

    Assume no ':' in key and no '\0' in both key and value."""
    for key, value in meta.iteritems():
        if ':' in key or '\0' in key:
            raise ValueError("':' and '\0' are forbidden in metadata keys")
        if '\0' in value:
            raise ValueError("'\\0' is forbidden in metadata values")
    return '\0'.join(['%s:%s' % (k, meta[k]) for k in sorted(meta)])

def decodemeta(data):
    """Return string to string dictionary from encoded version."""
    d = {}
    for l in data.split('\0'):
        if l:
            # values may contain ':', so split only on the first colon
            key, value = l.split(':', 1)
            d[key] = value
    return d

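A quick standalone round-trip of this metadata encoding (mirroring
encodemeta/decodemeta above rather than importing them):

    meta = {'date': '1400000000 0', 'user': 'Alice <alice@example.com>'}
    blob = '\0'.join(['%s:%s' % (k, meta[k]) for k in sorted(meta)])
    decoded = dict(item.split(':', 1) for item in blob.split('\0') if item)
    assert decoded == meta
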
class marker(object):
    """Wrap obsolete marker raw data"""

    def __init__(self, repo, data):
        # the repo argument will be used to create changectx in later versions
        self._repo = repo
        self._data = data
        self._decodedmeta = None

    def __hash__(self):
        return hash(self._data)

    def __eq__(self, other):
        if type(other) != type(self):
            return False
        return self._data == other._data

    def precnode(self):
        """Precursor changeset node identifier"""
        return self._data[0]

    def succnodes(self):
        """List of successor changeset node identifiers"""
        return self._data[1]

    def metadata(self):
        """Decoded metadata dictionary"""
        if self._decodedmeta is None:
            self._decodedmeta = decodemeta(self._data[3])
        return self._decodedmeta

    def date(self):
        """Creation date as (unixtime, offset)"""
        parts = self.metadata()['date'].split(' ')
        return (float(parts[0]), int(parts[1]))

    def flags(self):
        """The flags field of the marker"""
        return self._data[2]

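Since the wrapper only stores `repo` and decodes metadata lazily, a quick
sketch can exercise it without a repository at hand (assuming this module is
importable; `None` stands in for the repo argument):

    raw = ('a' * 20, ('b' * 20,), 0, 'date:1400000000 0\0user:test')
    m = marker(None, raw)
    assert m.precnode() == 'a' * 20
    assert m.succnodes() == ('b' * 20,)
    assert m.date() == (1400000000.0, 0)
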
class obsstore(object):
    """Store obsolete markers

    Markers can be accessed with two mappings:
    - precursors[x] -> set(markers on precursors edges of x)
    - successors[x] -> set(markers on successors edges of x)
    """

    def __init__(self, sopener):
        # caches for various obsolescence related sets
        self.caches = {}
        self._all = []
        # new markers to serialize
        self.precursors = {}
        self.successors = {}
        self.sopener = sopener
        data = sopener.tryread('obsstore')
        if data:
            self._load(_readmarkers(data))

    def __iter__(self):
        return iter(self._all)

    def __len__(self):
        return len(self._all)

    def __nonzero__(self):
        return bool(self._all)

    def create(self, transaction, prec, succs=(), flag=0, date=None,
               metadata=None):
        """obsolete: add a new obsolete marker

        * ensure it is hashable
        * check mandatory metadata
        * encode metadata

        If you are a human writing code creating markers you want to use the
        `createmarkers` function in this module instead.

        Return True if a new marker has been added, False if the marker
        already existed (no op).
        """
        if metadata is None:
            metadata = {}
        if 'date' not in metadata:
            if date is None:
                date = util.makedate()
            metadata['date'] = "%d %d" % date
        if len(prec) != 20:
            raise ValueError(prec)
        for succ in succs:
            if len(succ) != 20:
                raise ValueError(succ)
        if prec in succs:
            raise ValueError(_('in-marker cycle with %s') % node.hex(prec))
        marker = (str(prec), tuple(succs), int(flag), encodemeta(metadata))
        return bool(self.add(transaction, [marker]))

    def add(self, transaction, markers):
        """Add new markers to the store

        Take care of filtering duplicates.
        Return the number of new markers."""
        if not _enabled:
            raise util.Abort('obsolete feature is not enabled on this repo')
        known = set(self._all)
        new = []
        for m in markers:
            if m not in known:
                known.add(m)
                new.append(m)
        if new:
            f = self.sopener('obsstore', 'ab')
            try:
                # Whether the file's current position is at the beginning or
                # at the end after opening a file for appending is
                # implementation defined. So we must seek to the end before
                # calling tell(), or we may get a zero offset for non-zero
                # sized files on some platforms (issue3543).
                f.seek(0, _SEEK_END)
                offset = f.tell()
                transaction.add('obsstore', offset)
                # offset == 0: new file - add the version header
                for bytes in _encodemarkers(new, offset == 0):
                    f.write(bytes)
            finally:
                # XXX: f.close() == filecache invalidation == obsstore rebuilt.
                # call 'filecacheentry.refresh()' here
                f.close()
            self._load(new)
            # new markers *may* have changed several sets. invalidate the
            # caches.
            self.caches.clear()
        return len(new)

    def mergemarkers(self, transaction, data):
        markers = _readmarkers(data)
        self.add(transaction, markers)

    def _load(self, markers):
        for mark in markers:
            self._all.append(mark)
            pre, sucs = mark[:2]
            self.successors.setdefault(pre, set()).add(mark)
            for suc in sucs:
                self.precursors.setdefault(suc, set()).add(mark)
        if node.nullid in self.precursors:
            raise util.Abort(_('bad obsolescence marker detected: '
                               'invalid successors nullid'))

def _encodemarkers(markers, addheader=False):
    # Kept separate from flushmarkers(), it will be reused for
    # markers exchange.
    if addheader:
        yield _pack('>B', _fmversion)
    for marker in markers:
        yield _encodeonemarker(marker)


def _encodeonemarker(marker):
    pre, sucs, flags, metadata = marker
    nbsuc = len(sucs)
    format = _fmfixed + (_fmnode * nbsuc)
    data = [nbsuc, len(metadata), flags, pre]
    data.extend(sucs)
    return _pack(format, *data) + metadata

# arbitrarily picked to fit into the 8K limit from HTTP servers
# you have to take into account:
# - the version header
# - the base85 encoding
_maxpayload = 5300

def _pushkeyescape(markers):
    """encode markers into a dict suitable for pushkey exchange

    - binary data is base85 encoded
    - split in chunks smaller than 5300 bytes"""
    keys = {}
    parts = []
    currentlen = _maxpayload * 2  # ensure we create a new part
    for marker in markers:
        nextdata = _encodeonemarker(marker)
        if (len(nextdata) + currentlen > _maxpayload):
            currentpart = []
            currentlen = 0
            parts.append(currentpart)
        currentpart.append(nextdata)
        currentlen += len(nextdata)
    for idx, part in enumerate(reversed(parts)):
        data = ''.join([_pack('>B', _fmversion)] + part)
        keys['dump%i' % idx] = base85.b85encode(data)
    return keys
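The chunking above is a simple greedy strategy: start a new part whenever
adding the next encoded marker would exceed the payload limit. A standalone
sketch of the same logic with toy payloads and a limit of 10 bytes:

    _limit = 10
    payloads = ['aaaa', 'bbbb', 'cccc', 'dd']
    parts, cur, curlen = [], None, _limit * 2  # force a first part
    for data in payloads:
        if curlen + len(data) > _limit:
            cur, curlen = [], 0
            parts.append(cur)
        cur.append(data)
        curlen += len(data)
    assert parts == [['aaaa', 'bbbb'], ['cccc', 'dd']]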

def listmarkers(repo):
    """List markers over pushkey"""
    if not repo.obsstore:
        return {}
    return _pushkeyescape(repo.obsstore)

def pushmarker(repo, key, old, new):
    """Push markers over pushkey"""
    if not key.startswith('dump'):
        repo.ui.warn(_('unknown key: %r') % key)
        return 0
    if old:
        repo.ui.warn(_('unexpected old value for %r') % key)
        return 0
    data = base85.b85decode(new)
    lock = repo.lock()
    try:
        tr = repo.transaction('pushkey: obsolete markers')
        try:
            repo.obsstore.mergemarkers(tr, data)
            tr.close()
            return 1
        finally:
            tr.release()
    finally:
        lock.release()

def allmarkers(repo):
    """all obsolete markers known in a repository"""
    for markerdata in repo.obsstore:
        yield marker(repo, markerdata)

def precursormarkers(ctx):
    """obsolete markers marking this changeset as a successor"""
    for data in ctx._repo.obsstore.precursors.get(ctx.node(), ()):
        yield marker(ctx._repo, data)

def successormarkers(ctx):
    """obsolete markers making this changeset obsolete"""
    for data in ctx._repo.obsstore.successors.get(ctx.node(), ()):
        yield marker(ctx._repo, data)

def allsuccessors(obsstore, nodes, ignoreflags=0):
    """Yield node for every successor of <nodes>.

    Some successors may be unknown locally.

    This is a linear yield unsuited to detecting split changesets. It includes
    initial nodes too."""
    remaining = set(nodes)
    seen = set(remaining)
    while remaining:
        current = remaining.pop()
        yield current
        for mark in obsstore.successors.get(current, ()):
            # ignore markers flagged with the specified flag
            if mark[2] & ignoreflags:
                continue
            for suc in mark[1]:
                if suc not in seen:
                    seen.add(suc)
                    remaining.add(suc)
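The same transitive walk, sketched standalone over a plain dictionary shaped
like `obsstore.successors` (node -> markers), for markers A -> B and
B -> (C, D):

    succmap = {'A': [('A', ('B',), 0, '')], 'B': [('B', ('C', 'D'), 0, '')]}
    remaining = set(['A'])
    seen = set(remaining)
    out = []
    while remaining:
        current = remaining.pop()
        out.append(current)
        for mark in succmap.get(current, ()):
            for suc in mark[1]:
                if suc not in seen:
                    seen.add(suc)
                    remaining.add(suc)
    assert set(out) == set(['A', 'B', 'C', 'D'])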

def allprecursors(obsstore, nodes, ignoreflags=0):
    """Yield node for every precursor of <nodes>.

    Some precursors may be unknown locally.

    This is a linear yield unsuited to detecting folded changesets. It
    includes initial nodes too."""

    remaining = set(nodes)
    seen = set(remaining)
    while remaining:
        current = remaining.pop()
        yield current
        for mark in obsstore.precursors.get(current, ()):
            # ignore markers flagged with the specified flag
            if mark[2] & ignoreflags:
                continue
            suc = mark[0]
            if suc not in seen:
                seen.add(suc)
                remaining.add(suc)

def foreground(repo, nodes):
    """return all nodes in the "foreground" of other nodes

    The foreground of a revision is anything reachable using parent -> children
    or precursor -> successor relation. It is very similar to "descendant" but
    augmented with obsolescence information.

    Beware that possible obsolescence cycles may result in complex situations.
    """
    repo = repo.unfiltered()
    foreground = set(repo.set('%ln::', nodes))
    if repo.obsstore:
        # We only need this complicated logic if there is obsolescence
        # XXX will probably deserve an optimised revset.
        nm = repo.changelog.nodemap
        plen = -1
        # compute the whole set of successors or descendants
        while len(foreground) != plen:
            plen = len(foreground)
            succs = set(c.node() for c in foreground)
            mutable = [c.node() for c in foreground if c.mutable()]
            succs.update(allsuccessors(repo.obsstore, mutable))
            known = (n for n in succs if n in nm)
            foreground = set(repo.set('%ln::', known))
    return set(c.node() for c in foreground)


def successorssets(repo, initialnode, cache=None):
    """Return all sets of successors of initial nodes

    The successors set of a changeset A are a group of revisions that succeed
    A. It succeeds A as a consistent whole, each revision being only a partial
    replacement. The successors set contains non-obsolete changesets only.

    This function returns the full list of successor sets which is why it
    returns a list of tuples and not just a single tuple. Each tuple is a valid
    successors set. Note that (A,) may be a valid successors set for changeset
    A (see below).

    In most cases, a changeset A will have a single element (e.g. the changeset
    A is replaced by A') in its successors set. Though, it is also common for a
    changeset A to have no elements in its successors set (e.g. the changeset
    has been pruned). Therefore, the returned list of successors sets will be
    [(A',)] or [], respectively.

    When a changeset A is split into A' and B', however, it will result in a
    successors set containing more than a single element, i.e. [(A',B')].
    Divergent changesets will result in multiple successors sets, i.e. [(A',),
    (A'',)].

    If a changeset A is not obsolete, then it will conceptually have no
    successors set. To distinguish this from a pruned changeset, the successors
    set will only contain itself, i.e. [(A,)].

    Finally, successors unknown locally are considered to be pruned (obsoleted
    without any successors).

    The optional `cache` parameter is a dictionary that may contain precomputed
    successors sets. It is meant to reuse the computation of a previous call to
    `successorssets` when multiple calls are made at the same time. The cache
    dictionary is updated in place. The caller is responsible for its life
    span. Code that makes multiple calls to `successorssets` *must* use this
    cache mechanism or suffer terrible performance.
    """

    succmarkers = repo.obsstore.successors

    # Stack of nodes we search successors sets for
    toproceed = [initialnode]
    # set version of above list for fast loop detection
    # element added to "toproceed" must be added here
    stackedset = set(toproceed)
    if cache is None:
        cache = {}

    # This while loop is the flattened version of a recursive search for
    # successors sets
    #
    # def successorssets(x):
    #     successors = directsuccessors(x)
    #     ss = [[]]
    #     for succ in directsuccessors(x):
    #         # product as in itertools cartesian product
    #         ss = product(ss, successorssets(succ))
    #     return ss
    #
    # But we can not use plain recursive calls here:
    # - that would blow the python call stack
    # - obsolescence markers may have cycles, we need to handle them.
    #
    # The `toproceed` list acts as our call stack. Every node we search
    # successors sets for is stacked there.
    #
    # The `stackedset` is a set version of this stack used to check if a node
    # is already stacked. This check is used to detect cycles and prevent
    # infinite loops.
    #
    # successors sets of all nodes are stored in the `cache` dictionary.
    #
    # After this while loop ends we use the cache to return the successors
    # sets for the node requested by the caller.
    while toproceed:
        # Every iteration tries to compute the successors sets of the topmost
        # node of the stack: CURRENT.
        #
        # There are four possible outcomes:
        #
        # 1) We already know the successors sets of CURRENT:
        #    -> mission accomplished, pop it from the stack.
        # 2) Node is not obsolete:
        #    -> the node is its own successors sets. Add it to the cache.
        # 3) We do not know successors set of direct successors of CURRENT:
        #    -> We add those successors to the stack.
        # 4) We know successors sets of all direct successors of CURRENT:
        #    -> We can compute CURRENT successors set and add it to the
        #       cache.
        #
        current = toproceed[-1]
        if current in cache:
            # case (1): We already know the successors sets
            stackedset.remove(toproceed.pop())
        elif current not in succmarkers:
            # case (2): The node is not obsolete.
            if current in repo:
                # We have a valid last successor.
                cache[current] = [(current,)]
            else:
                # Final obsolete version is unknown locally.
                # Do not count that as a valid successor.
                cache[current] = []
        else:
            # cases (3) and (4)
            #
            # We proceed in two phases. Phase 1 aims to distinguish case (3)
            # from case (4):
            #
            #     For each direct successor of CURRENT, we check whether its
            #     successors sets are known. If they are not, we stack the
            #     unknown node and proceed to the next iteration of the while
            #     loop. (case 3)
            #
            #     During this step, we may detect obsolescence cycles: a node
            #     with unknown successors sets but already in the call stack.
            #     In such a situation, we arbitrarily set the successors sets
            #     of the node to nothing (node pruned) to break the cycle.
            #
            #     If no break was encountered we proceed to phase 2.
            #
            # Phase 2 computes successors sets of CURRENT (case 4); see details
            # in phase 2 itself.
            #
            # Note the two levels of iteration in each phase.
            # - The first one handles obsolescence markers using CURRENT as
            #   precursor (successors markers of CURRENT).
            #
            #   Having multiple entries here means divergence.
            #
            # - The second one handles successors defined in each marker.
            #
            #   Having none means a pruned node, multiple successors mean a
            #   split, a single successor is a standard replacement.
            #
            for mark in sorted(succmarkers[current]):
                for suc in mark[1]:
                    if suc not in cache:
                        if suc in stackedset:
                            # cycle breaking
                            cache[suc] = []
                        else:
                            # case (3) If we have not computed successors sets
                            # of one of those successors we add it to the
                            # `toproceed` stack and stop all work for this
                            # iteration.
                            toproceed.append(suc)
                            stackedset.add(suc)
                            break
                else:
                    continue
                break
            else:
                # case (4): we know all successors sets of all direct
                # successors
                #
                # Successors set contributed by each marker depends on the
                # successors sets of all its "successors" nodes.
                #
                # Each different marker is a divergence in the obsolescence
                # history. It contributes successors sets distinct from other
                # markers.
                #
                # Within a marker, a successor may have divergent successors
                # sets. In such a case, the marker will contribute multiple
                # divergent successors sets. If multiple successors have
                # divergent successors sets, a Cartesian product is used.
                #
                # At the end we post-process successors sets to remove
                # duplicated entries and successors sets that are strict
                # subsets of another one.
                succssets = []
                for mark in sorted(succmarkers[current]):
                    # successors sets contributed by this marker
                    markss = [[]]
                    for suc in mark[1]:
                        # Cartesian product with previous successors
                        productresult = []
                        for prefix in markss:
                            for suffix in cache[suc]:
                                newss = list(prefix)
                                for part in suffix:
                                    # do not duplicate entries in a successors
                                    # set. first entry wins.
                                    if part not in newss:
                                        newss.append(part)
                                productresult.append(newss)
                        markss = productresult
                    succssets.extend(markss)
                # remove duplicates and subsets
                seen = []
                final = []
                candidate = sorted(((set(s), s) for s in succssets if s),
                                   key=lambda x: len(x[1]), reverse=True)
                for setversion, listversion in candidate:
                    for seenset in seen:
                        if setversion.issubset(seenset):
                            break
                    else:
                        final.append(listversion)
                        seen.append(setversion)
                final.reverse() # put small successors sets first
                cache[current] = final
    return cache[initialnode]
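The subset-pruning step at the end is worth isolating. A standalone sketch
with hand-written successors sets, showing that duplicates and strict subsets
of an already kept set are dropped:

    succssets = [['B', 'C'], ['B'], ['B', 'C'], ['D']]
    seen, final = [], []
    candidate = sorted(((set(s), s) for s in succssets if s),
                       key=lambda x: len(x[1]), reverse=True)
    for setversion, listversion in candidate:
        for seenset in seen:
            if setversion.issubset(seenset):
                break
        else:
            final.append(listversion)
            seen.append(setversion)
    final.reverse()  # put small successors sets first
    assert final == [['D'], ['B', 'C']]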

def _knownrevs(repo, nodes):
    """yield revision numbers of known nodes passed in parameters

    Unknown revisions are silently ignored."""
    torev = repo.changelog.nodemap.get
    for n in nodes:
        rev = torev(n)
        if rev is not None:
            yield rev

# mapping of 'set-name' -> <function to compute this set>
cachefuncs = {}
def cachefor(name):
    """Decorator to register a function as computing the cache for a set"""
    def decorator(func):
        assert name not in cachefuncs
        cachefuncs[name] = func
        return func
    return decorator

def getrevs(repo, name):
    """Return the set of revisions that belong to the <name> set

    Such access may compute the set and cache it for future use"""
    repo = repo.unfiltered()
    if not repo.obsstore:
        return ()
    if name not in repo.obsstore.caches:
        repo.obsstore.caches[name] = cachefuncs[name](repo)
    return repo.obsstore.caches[name]
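The decorator simply registers the compute function in `cachefuncs` so that
getrevs() can build each named set lazily. A standalone demo of the same
idiom (the 'example' name is illustrative, not a real obsolescence set):

    registry = {}
    def registerfor(name):
        def decorator(func):
            assert name not in registry
            registry[name] = func
            return func
        return decorator

    @registerfor('example')
    def _computeexample(repo):
        return set([1, 2, 3])

    assert registry['example'](None) == set([1, 2, 3])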

# To be simple we need to invalidate obsolescence caches when:
#
# - a new changeset is added
# - public phase is changed
# - obsolescence markers are added
# - strip is used on a repo
def clearobscaches(repo):
    """Remove all obsolescence related caches from a repo

    This removes all caches in the obsstore if the obsstore already exists on
    the repo.

    (We could be smarter here given the exact event that triggers the cache
    clearing)"""
    # only clear caches if there is obsstore data in this repo
    if 'obsstore' in repo._filecache:
        repo.obsstore.caches.clear()

@cachefor('obsolete')
def _computeobsoleteset(repo):
    """the set of obsolete revisions"""
    obs = set()
    getrev = repo.changelog.nodemap.get
    getphase = repo._phasecache.phase
    for n in repo.obsstore.successors:
        rev = getrev(n)
        if rev is not None and getphase(repo, rev):
            obs.add(rev)
    return obs

@cachefor('unstable')
def _computeunstableset(repo):
    """the set of non obsolete revisions with obsolete parents"""
    # revset is not efficient enough here
    # we do (obsolete()::) - obsolete() by hand
    obs = getrevs(repo, 'obsolete')
    if not obs:
        return set()
    cl = repo.changelog
    return set(r for r in cl.descendants(obs) if r not in obs)

@cachefor('suspended')
def _computesuspendedset(repo):
    """the set of obsolete parents with non obsolete descendants"""
    suspended = repo.changelog.ancestors(getrevs(repo, 'unstable'))
    return set(r for r in getrevs(repo, 'obsolete') if r in suspended)

@cachefor('extinct')
def _computeextinctset(repo):
    """the set of obsolete parents without non obsolete descendants"""
    return getrevs(repo, 'obsolete') - getrevs(repo, 'suspended')


@cachefor('bumped')
def _computebumpedset(repo):
    """the set of revs trying to obsolete public revisions"""
    bumped = set()
    # util function (avoid attribute lookup in the loop)
    phase = repo._phasecache.phase # would be faster to grab the full list
    public = phases.public
    cl = repo.changelog
    torev = cl.nodemap.get
    obs = getrevs(repo, 'obsolete')
    for rev in repo:
        # We only evaluate mutable, non-obsolete revisions
        if (public < phase(repo, rev)) and (rev not in obs):
            node = cl.node(rev)
            # (future) A cache of precursors may be worth it if splits are
            # very common
            for pnode in allprecursors(repo.obsstore, [node],
                                       ignoreflags=bumpedfix):
                prev = torev(pnode) # unfiltered! but so is phasecache
                if (prev is not None) and (phase(repo, prev) <= public):
                    # we have a public precursor
                    bumped.add(rev)
                    break # Next draft!
    return bumped

@cachefor('divergent')
def _computedivergentset(repo):
    """the set of revs that compete to be the final successors of some
    revision.
    """
    divergent = set()
    obsstore = repo.obsstore
    newermap = {}
    for ctx in repo.set('(not public()) - obsolete()'):
        mark = obsstore.precursors.get(ctx.node(), ())
        toprocess = set(mark)
        while toprocess:
            prec = toprocess.pop()[0]
            if prec not in newermap:
                successorssets(repo, prec, newermap)
            newer = [n for n in newermap[prec] if n]
            if len(newer) > 1:
                divergent.add(ctx.rev())
                break
            toprocess.update(obsstore.precursors.get(prec, ()))
    return divergent


-def createmarkers(repo, relations, flag=0, metadata=None):
+def createmarkers(repo, relations, flag=0, date=None, metadata=None):
    """Add obsolete markers between changesets in a repo

    <relations> must be an iterable of (<old>, (<new>, ...)[,{metadata}])
    tuples. `old` and `news` are changectx. metadata is an optional
    dictionary containing metadata for this marker only. It is merged with
    the global metadata specified through the `metadata` argument of this
    function.

    Trying to obsolete a public changeset will raise an exception.

    Current user and date are used unless specified otherwise in the
    metadata attribute.

    This function operates within a transaction of its own, but does
    not take any lock on the repo.
    """
    # prepare metadata
    if metadata is None:
        metadata = {}
-    if 'date' not in metadata:
-        metadata['date'] = '%i %i' % util.makedate()
    if 'user' not in metadata:
        metadata['user'] = repo.ui.username()
    tr = repo.transaction('add-obsolescence-marker')
    try:
        for rel in relations:
            prec = rel[0]
            sucs = rel[1]
            localmetadata = metadata.copy()
            if 2 < len(rel):
                localmetadata.update(rel[2])

            if not prec.mutable():
                raise util.Abort("cannot obsolete immutable changeset: %s"
                                 % prec)
            nprec = prec.node()
            nsucs = tuple(s.node() for s in sucs)
            if nprec in nsucs:
                raise util.Abort("changeset %s cannot obsolete itself" % prec)
-            repo.obsstore.create(tr, nprec, nsucs, flag,
+            repo.obsstore.create(tr, nprec, nsucs, flag, date=date,
                                 metadata=localmetadata)
            repo.filteredrevcache.clear()
        tr.close()
    finally:
        tr.release()
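A hedged sketch of how a caller could use the new `date` argument this
revision introduces (assumes an existing `repo` and changectx objects `old`
and `new`; only meaningful inside a Mercurial process):

    # from mercurial import util, obsolete
    # obsolete.createmarkers(repo, [(old, (new,))],
    #                        date=util.makedate())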