obsstore: add relevantmarkers method...
Pierre-Yves David
r22271:8c69262d default
@@ -1,935 +1,963 @@
# obsolete.py - obsolete markers handling
#
# Copyright 2012 Pierre-Yves David <pierre-yves.david@ens-lyon.org>
#                Logilab SA        <contact@logilab.fr>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""Obsolete marker handling

An obsolete marker maps an old changeset to a list of new
changesets. If the list of new changesets is empty, the old changeset
is said to be "killed". Otherwise, the old changeset is being
"replaced" by the new changesets.

Obsolete markers can be used to record and distribute changeset graph
transformations performed by history rewrite operations, and help
building new tools to reconcile conflicting rewrite actions. To
facilitate conflict resolution, markers include various annotations
besides old and new changeset identifiers, such as creation date or
author name.

The old obsoleted changeset is called a "precursor" and possible
replacements are called "successors". Markers that used changeset X as
a precursor are called "successor markers of X" because they hold
information about the successors of X. Markers that use changeset Y as
a successor are called "precursor markers of Y" because they hold
information about the precursors of Y.

Examples:

- When changeset A is replaced by changeset A', one marker is stored:

    (A, (A',))

- When changesets A and B are folded into a new changeset C, two markers are
  stored:

    (A, (C,)) and (B, (C,))

- When changeset A is simply "pruned" from the graph, a marker is created:

    (A, ())

- When changeset A is split into B and C, a single marker is used:

    (A, (B, C))

  We use a single marker to distinguish the "split" case from the "divergence"
  case. If two independent operations rewrite the same changeset A into A' and
  A'', we have an error case: divergent rewriting. We can detect it because
  two markers will be created independently:

  (A, (B,)) and (A, (C,))

Format
------

Markers are stored in an append-only file stored in
'.hg/store/obsstore'.

The file starts with a version header:

- 1 unsigned byte: version number, starting at zero.


The header is followed by the markers. Each marker is made of:

- 1 unsigned byte: number of new changesets "N", can be zero.

- 1 unsigned 32-bit integer: metadata size "M" in bytes.

- 1 byte: a bit field. It is reserved for flags used in common
  obsolete marker operations, to avoid repeated decoding of metadata
  entries.

- 20 bytes: obsoleted changeset identifier.

- N*20 bytes: new changeset identifiers.

- M bytes: metadata as a sequence of nul-terminated strings. Each
  string contains a key and a value, separated by a colon ':', without
  additional encoding. Keys cannot contain '\0' or ':' and values
  cannot contain '\0'.

"""
import struct
import util, base85, node
import phases
from i18n import _

_pack = struct.pack
_unpack = struct.unpack

_SEEK_END = 2 # os.SEEK_END was introduced in Python 2.5

# the obsolete feature is not mature enough to be enabled by default.
# you have to rely on a third party extension to enable this.
_enabled = False

# data used for parsing and writing
_fmversion = 0
_fmfixed = '>BIB20s'
_fmnode = '20s'
_fmfsize = struct.calcsize(_fmfixed)
_fnodesize = struct.calcsize(_fmnode)
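
# Illustrative sketch (not part of the original module): packing and
# unpacking the fixed part of a version-0 marker by hand, using the formats
# defined above. The 20-byte node id is a dummy value.
#
#   >>> raw = _pack(_fmfixed, 1, 0, 0, 'x' * 20)  # N=1, M=0, flags=0
#   >>> _unpack(_fmfixed, raw)
#   (1, 0, 0, 'xxxxxxxxxxxxxxxxxxxx')
#   >>> len(raw) == _fmfsize
#   True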

### obsolescence marker flag

## bumpedfix flag
#
# When a changeset A' succeeds a changeset A which became public, we call A'
# "bumped" because it's a successor of a public changeset
#
#  o  A' (bumped)
#  |`:
#  | o  A
#  |/
#  o  Z
#
# The way to solve this situation is to create a new changeset Ad as a child
# of A. This changeset has the same content as A'. So the diff from A to A'
# is the same as the diff from A to Ad. Ad is marked as a successor of A'
#
#  o  Ad
#  |`:
#  | x  A'
#  |'|
#  o |  A
#  |/
#  o  Z
#
# But by transitivity Ad is also a successor of A. To avoid having Ad marked
# as bumped too, we add the `bumpedfix` flag to the marker: <A', (Ad,)>.
# This flag means that the successors express the changes between the public
# and bumped version and fix the situation, breaking the transitivity of
# "bumped" here.
bumpedfix = 1

def _readmarkers(data):
    """Read and enumerate markers from raw data"""
    off = 0
    diskversion = _unpack('>B', data[off:off + 1])[0]
    off += 1
    if diskversion != _fmversion:
        raise util.Abort(_('parsing obsolete marker: unknown version %r')
                         % diskversion)

    # Loop on markers
    l = len(data)
    while off + _fmfsize <= l:
        # read fixed part
        cur = data[off:off + _fmfsize]
        off += _fmfsize
        nbsuc, mdsize, flags, pre = _unpack(_fmfixed, cur)
        # read replacement
        sucs = ()
        if nbsuc:
            s = (_fnodesize * nbsuc)
            cur = data[off:off + s]
            sucs = _unpack(_fmnode * nbsuc, cur)
            off += s
        # read metadata
        # (metadata will be decoded on demand)
        metadata = data[off:off + mdsize]
        if len(metadata) != mdsize:
            raise util.Abort(_('parsing obsolete marker: metadata is too '
                               'short, %d bytes expected, got %d')
                             % (mdsize, len(metadata)))
        off += mdsize
        meta = decodemeta(metadata)
        try:
            date = util.parsedate(decodemeta(metadata).pop('date', '0 0'))
        except util.Abort:
            date = (0., 0)
        parents = None
        if 'p2' in meta:
            parents = (meta.pop('p1', None), meta.pop('p2', None))
        elif 'p1' in meta:
            parents = (meta.pop('p1', None),)
        elif 'p0' in meta:
            parents = ()
        if parents is not None:
            try:
                parents = tuple(node.bin(p) for p in parents)
                # if parent content is not a nodeid, drop the data
                for p in parents:
                    if len(p) != 20:
                        parents = None
                        break
            except TypeError:
                # if content cannot be translated to nodeid drop the data.
                parents = None

        metadata = encodemeta(meta)

        yield (pre, sucs, flags, metadata, date, parents)

def encodemeta(meta):
    """Return encoded metadata string to string mapping.

    Assume no ':' in key and no '\0' in both key and value."""
    for key, value in meta.iteritems():
        if ':' in key or '\0' in key:
            raise ValueError("':' and '\0' are forbidden in metadata keys")
        if '\0' in value:
            raise ValueError("'\0' is forbidden in metadata values")
    return '\0'.join(['%s:%s' % (k, meta[k]) for k in sorted(meta)])

def decodemeta(data):
    """Return string to string dictionary from encoded version."""
    d = {}
    for l in data.split('\0'):
        if l:
            key, value = l.split(':')
            d[key] = value
    return d
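
# Illustrative sketch (not part of the original module): metadata round-trips
# through the two helpers above, with keys sorted on encode.
#
#   >>> encodemeta({'user': 'alice', 'note': 'amend'})
#   'note:amend\x00user:alice'
#   >>> d = decodemeta('note:amend\x00user:alice')
#   >>> d == {'note': 'amend', 'user': 'alice'}
#   True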

class marker(object):
    """Wrap obsolete marker raw data"""

    def __init__(self, repo, data):
        # the repo argument will be used to create changectx in a later
        # version
        self._repo = repo
        self._data = data
        self._decodedmeta = None

    def __hash__(self):
        return hash(self._data)

    def __eq__(self, other):
        if type(other) != type(self):
            return False
        return self._data == other._data

    def precnode(self):
        """Precursor changeset node identifier"""
        return self._data[0]

    def succnodes(self):
        """List of successor changeset node identifiers"""
        return self._data[1]

    def parentnodes(self):
        """Parents of the precursor (None if not recorded)"""
        return self._data[5]

    def metadata(self):
        """Decoded metadata dictionary"""
        if self._decodedmeta is None:
            self._decodedmeta = decodemeta(self._data[3])
        return self._decodedmeta

    def date(self):
        """Creation date as (unixtime, offset)"""
        return self._data[4]

    def flags(self):
        """The flags field of the marker"""
        return self._data[2]

class obsstore(object):
    """Store obsolete markers

    Markers can be accessed with three mappings:
    - precursors[x] -> set(markers on precursors edges of x)
    - successors[x] -> set(markers on successors edges of x)
    - children[x]   -> set(markers on precursors edges of children(x))
    """

    fields = ('prec', 'succs', 'flag', 'meta', 'date', 'parents')
    # prec:    nodeid, precursor changeset
    # succs:   tuple of nodeid, successor changesets (0-N length)
    # flag:    integer, flag field carrying modifier for the markers (see doc)
    # meta:    binary blob, encoded metadata dictionary
    # date:    (float, int) tuple, date of marker creation
    # parents: (tuple of nodeid) or None, parents of precursors
    #          None is used when no data has been recorded

    def __init__(self, sopener):
        # caches for various obsolescence related sets
        self.caches = {}
        self._all = []
        self.precursors = {}
        self.successors = {}
        self.children = {}
        self.sopener = sopener
        data = sopener.tryread('obsstore')
        if data:
            self._load(_readmarkers(data))

    def __iter__(self):
        return iter(self._all)

    def __len__(self):
        return len(self._all)

    def __nonzero__(self):
        return bool(self._all)

    def create(self, transaction, prec, succs=(), flag=0, parents=None,
               date=None, metadata=None):
        """obsolete: add a new obsolete marker

        * ensuring it is hashable
        * check mandatory metadata
        * encode metadata

        If you are a human writing code creating markers you want to use the
        `createmarkers` function in this module instead.

        return True if a new marker has been added, False if the marker
        already existed (no op).
        """
        if metadata is None:
            metadata = {}
        if date is None:
            if 'date' in metadata:
                # as a courtesy for out-of-tree extensions
                date = util.parsedate(metadata.pop('date'))
            else:
                date = util.makedate()
        if len(prec) != 20:
            raise ValueError(prec)
        for succ in succs:
            if len(succ) != 20:
                raise ValueError(succ)
        if prec in succs:
            raise ValueError(_('in-marker cycle with %s') % node.hex(prec))
        marker = (str(prec), tuple(succs), int(flag), encodemeta(metadata),
                  date, parents)
        return bool(self.add(transaction, [marker]))

    def add(self, transaction, markers):
        """Add new markers to the store

        Take care of filtering duplicates.
        Return the number of new markers."""
        if not _enabled:
            raise util.Abort('obsolete feature is not enabled on this repo')
        known = set(self._all)
        new = []
        for m in markers:
            if m not in known:
                known.add(m)
                new.append(m)
        if new:
            f = self.sopener('obsstore', 'ab')
            try:
                # Whether the file's current position is at the beginning or
                # at the end after opening a file for appending is
                # implementation defined. So we must seek to the end before
                # calling tell(), or we may get a zero offset for non-zero
                # sized files on some platforms (issue3543).
                f.seek(0, _SEEK_END)
                offset = f.tell()
                transaction.add('obsstore', offset)
                # offset == 0: new file - add the version header
                for bytes in _encodemarkers(new, offset == 0):
                    f.write(bytes)
            finally:
                # XXX: f.close() == filecache invalidation == obsstore
                # rebuilt. call 'filecacheentry.refresh()' here
                f.close()
            self._load(new)
            # new markers *may* have changed several sets. invalidate the
            # cache.
            self.caches.clear()
        return len(new)

    def mergemarkers(self, transaction, data):
        markers = _readmarkers(data)
        self.add(transaction, markers)

    def _load(self, markers):
        for mark in markers:
            self._all.append(mark)
            pre, sucs = mark[:2]
            self.successors.setdefault(pre, set()).add(mark)
            for suc in sucs:
                self.precursors.setdefault(suc, set()).add(mark)
            parents = mark[5]
            if parents is not None:
                for p in parents:
                    self.children.setdefault(p, set()).add(mark)
        if node.nullid in self.precursors:
            raise util.Abort(_('bad obsolescence marker detected: '
                               'invalid successors nullid'))

    def relevantmarkers(self, nodes):
        """return a set of all obsolescence markers relevant to a set of nodes.

        "relevant" to a set of nodes means:

        - markers that use one of these nodes as successor
        - prune markers of direct children of these nodes
        - recursive application of the two rules on precursors of these
          markers

        It is a set so you cannot rely on order."""

        pendingnodes = set(nodes)
        seenmarkers = set()
        seennodes = set(pendingnodes)
        precursorsmarkers = self.precursors
        children = self.children
        while pendingnodes:
            direct = set()
            for current in pendingnodes:
                direct.update(precursorsmarkers.get(current, ()))
                pruned = [m for m in children.get(current, ()) if not m[1]]
                direct.update(pruned)
            direct -= seenmarkers
            pendingnodes = set([m[0] for m in direct])
            seenmarkers |= direct
            pendingnodes -= seennodes
            seennodes |= pendingnodes
        return seenmarkers
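
    # Illustrative walkthrough (not part of the original module), assuming a
    # store holding exactly two hypothetical markers: M1 = (A, (B,), ...)
    # rewriting node A into B, and M2 = (B, (), ...) pruning B with parent A
    # recorded.
    #
    #   >>> store.relevantmarkers([B]) == set([M1, M2])
    #   True
    #
    # M1 is found through precursors[B]; its precursor node A is then queued,
    # and the prune marker M2 is picked up through children[A] because it has
    # no successors. The result is a plain set, so iteration order is
    # undefined.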

def _encodemarkers(markers, addheader=False):
    # Kept separate from flushmarkers(), it will be reused for
    # markers exchange.
    if addheader:
        yield _pack('>B', _fmversion)
    for marker in markers:
        yield _encodeonemarker(marker)


def _encodeonemarker(marker):
    pre, sucs, flags, metadata, date, parents = marker
    metadata = decodemeta(metadata)
    metadata['date'] = '%d %i' % date
    if parents is not None:
        if not parents:
            # mark that we explicitly recorded no parents
            metadata['p0'] = ''
        for i, p in enumerate(parents):
            metadata['p%i' % (i + 1)] = node.hex(p)
    metadata = encodemeta(metadata)
    nbsuc = len(sucs)
    format = _fmfixed + (_fmnode * nbsuc)
    data = [nbsuc, len(metadata), flags, pre]
    data.extend(sucs)
    return _pack(format, *data) + metadata
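
# Illustrative sketch (not part of the original module): a marker encoded by
# _encodeonemarker can be read back by _readmarkers once prefixed with the
# version header. Node ids are dummy 20-byte strings.
#
#   >>> m = ('a' * 20, ('b' * 20,), 0, encodemeta({}), (0.0, 0), None)
#   >>> raw = _pack('>B', _fmversion) + _encodeonemarker(m)
#   >>> list(_readmarkers(raw))[0][0] == 'a' * 20
#   True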

# arbitrarily picked to fit into 8K limit from HTTP server
# you have to take into account:
# - the version header
# - the base85 encoding
_maxpayload = 5300

def _pushkeyescape(markers):
    """encode markers into a dict suitable for pushkey exchange

    - binary data is base85 encoded
    - split in chunks smaller than 5300 bytes"""
    keys = {}
    parts = []
    currentlen = _maxpayload * 2 # ensure we create a new part
    for marker in markers:
        nextdata = _encodeonemarker(marker)
        if (len(nextdata) + currentlen > _maxpayload):
            currentpart = []
            currentlen = 0
            parts.append(currentpart)
        currentpart.append(nextdata)
        currentlen += len(nextdata)
    for idx, part in enumerate(reversed(parts)):
        data = ''.join([_pack('>B', _fmversion)] + part)
        keys['dump%i' % idx] = base85.b85encode(data)
    return keys

def listmarkers(repo):
    """List markers over pushkey"""
    if not repo.obsstore:
        return {}
    return _pushkeyescape(repo.obsstore)

def pushmarker(repo, key, old, new):
    """Push markers over pushkey"""
    if not key.startswith('dump'):
        repo.ui.warn(_('unknown key: %r') % key)
        return 0
    if old:
        repo.ui.warn(_('unexpected old value for %r') % key)
        return 0
    data = base85.b85decode(new)
    lock = repo.lock()
    try:
        tr = repo.transaction('pushkey: obsolete markers')
        try:
            repo.obsstore.mergemarkers(tr, data)
            tr.close()
            return 1
        finally:
            tr.release()
    finally:
        lock.release()

def allmarkers(repo):
    """all obsolete markers known in a repository"""
    for markerdata in repo.obsstore:
        yield marker(repo, markerdata)

def precursormarkers(ctx):
    """obsolete markers marking this changeset as a successor"""
    for data in ctx._repo.obsstore.precursors.get(ctx.node(), ()):
        yield marker(ctx._repo, data)

def successormarkers(ctx):
    """obsolete markers making this changeset obsolete"""
    for data in ctx._repo.obsstore.successors.get(ctx.node(), ()):
        yield marker(ctx._repo, data)

def allsuccessors(obsstore, nodes, ignoreflags=0):
    """Yield node for every successor of <nodes>.

    Some successors may be unknown locally.

    This is a linear yield unsuited to detecting split changesets. It includes
    initial nodes too."""
    remaining = set(nodes)
    seen = set(remaining)
    while remaining:
        current = remaining.pop()
        yield current
        for mark in obsstore.successors.get(current, ()):
            # ignore marker flagged with specified flag
            if mark[2] & ignoreflags:
                continue
            for suc in mark[1]:
                if suc not in seen:
                    seen.add(suc)
                    remaining.add(suc)

def allprecursors(obsstore, nodes, ignoreflags=0):
    """Yield node for every precursor of <nodes>.

    Some precursors may be unknown locally.

    This is a linear yield unsuited to detecting folded changesets. It includes
    initial nodes too."""

    remaining = set(nodes)
    seen = set(remaining)
    while remaining:
        current = remaining.pop()
        yield current
        for mark in obsstore.precursors.get(current, ()):
            # ignore marker flagged with specified flag
            if mark[2] & ignoreflags:
                continue
            suc = mark[0]
            if suc not in seen:
                seen.add(suc)
                remaining.add(suc)
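
# Illustrative sketch (not part of the original module): with hypothetical
# markers (A, (B,)) and (B, (C, D)) in `store`, allsuccessors walks the
# successor edges transitively and yields the initial node too:
#
#   >>> sorted(allsuccessors(store, [A])) == sorted([A, B, C, D])
#   True
#
# The yield is "linear": the split marker (B, (C, D)) contributes C and D as
# separate nodes, with no grouping information preserved.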

def foreground(repo, nodes):
    """return all nodes in the "foreground" of other nodes

    The foreground of a revision is anything reachable using parent -> children
    or precursor -> successor relation. It is very similar to "descendant" but
    augmented with obsolescence information.

    Beware that possible obsolescence cycles may result in complex situations.
    """
    repo = repo.unfiltered()
    foreground = set(repo.set('%ln::', nodes))
    if repo.obsstore:
        # We only need this complicated logic if there is obsolescence
        # XXX will probably deserve an optimised revset.
        nm = repo.changelog.nodemap
        plen = -1
        # compute the whole set of successors or descendants
        while len(foreground) != plen:
            plen = len(foreground)
            succs = set(c.node() for c in foreground)
            mutable = [c.node() for c in foreground if c.mutable()]
            succs.update(allsuccessors(repo.obsstore, mutable))
            known = (n for n in succs if n in nm)
            foreground = set(repo.set('%ln::', known))
    return set(c.node() for c in foreground)


def successorssets(repo, initialnode, cache=None):
    """Return all sets of successors of initial nodes

    The successors set of a changeset A is a group of revisions that succeed
    A. It succeeds A as a consistent whole, each revision being only a partial
    replacement. The successors set contains non-obsolete changesets only.

    This function returns the full list of successor sets which is why it
    returns a list of tuples and not just a single tuple. Each tuple is a valid
    successors set. Note that (A,) may be a valid successors set for changeset
    A (see below).

    In most cases, a changeset A will have a single element (e.g. the changeset
    A is replaced by A') in its successors set. Though, it is also common for a
    changeset A to have no elements in its successor set (e.g. the changeset
    has been pruned). Therefore, the returned list of successors sets will be
    [(A',)] or [], respectively.

    When a changeset A is split into A' and B', however, it will result in a
    successors set containing more than a single element, i.e. [(A',B')].
    Divergent changesets will result in multiple successors sets, i.e. [(A',),
    (A'')].

    If a changeset A is not obsolete, then it will conceptually have no
    successors set. To distinguish this from a pruned changeset, the successor
    set will only contain itself, i.e. [(A,)].

    Finally, successors unknown locally are considered to be pruned (obsoleted
    without any successors).

    The optional `cache` parameter is a dictionary that may contain precomputed
    successors sets. It is meant to reuse the computation of a previous call to
    `successorssets` when multiple calls are made at the same time. The cache
    dictionary is updated in place. The caller is responsible for its life
    span. Code that makes multiple calls to `successorssets` *must* use this
    cache mechanism or suffer terrible performance.
    """

    succmarkers = repo.obsstore.successors

    # Stack of nodes we search successors sets for
    toproceed = [initialnode]
    # set version of above list for fast loop detection
    # element added to "toproceed" must be added here
    stackedset = set(toproceed)
    if cache is None:
        cache = {}

    # This while loop is the flattened version of a recursive search for
    # successors sets
    #
    # def successorssets(x):
    #     successors = directsuccessors(x)
    #     ss = [[]]
    #     for succ in directsuccessors(x):
    #         # product as in itertools cartesian product
    #         ss = product(ss, successorssets(succ))
    #     return ss
    #
    # But we can not use plain recursive calls here:
    # - that would blow the python call stack
    # - obsolescence markers may have cycles, we need to handle them.
    #
    # The `toproceed` list acts as our call stack. Every node we search
    # successors set for is stacked there.
    #
    # The `stackedset` is a set version of this stack used to check if a node
    # is already stacked. This check is used to detect cycles and prevent
    # infinite loops.
    #
    # successors sets of all nodes are stored in the `cache` dictionary.
    #
    # After this while loop ends we use the cache to return the successors sets
    # for the node requested by the caller.
    while toproceed:
        # Every iteration tries to compute the successors sets of the topmost
        # node of the stack: CURRENT.
        #
        # There are four possible outcomes:
        #
        # 1) We already know the successors sets of CURRENT:
        #    -> mission accomplished, pop it from the stack.
        # 2) Node is not obsolete:
        #    -> the node is its own successors sets. Add it to the cache.
        # 3) We do not know successors set of direct successors of CURRENT:
        #    -> We add those successors to the stack.
        # 4) We know successors sets of all direct successors of CURRENT:
        #    -> We can compute CURRENT successors set and add it to the
        #       cache.
        #
        current = toproceed[-1]
        if current in cache:
            # case (1): We already know the successors sets
            stackedset.remove(toproceed.pop())
        elif current not in succmarkers:
            # case (2): The node is not obsolete.
            if current in repo:
                # We have a valid last successors.
                cache[current] = [(current,)]
            else:
                # Final obsolete version is unknown locally.
                # Do not count that as a valid successors
                cache[current] = []
        else:
            # cases (3) and (4)
            #
            # We proceed in two phases. Phase 1 aims to distinguish case (3)
            # from case (4):
            #
            # For each direct successors of CURRENT, we check whether its
            # successors sets are known. If they are not, we stack the
            # unknown node and proceed to the next iteration of the while
            # loop. (case 3)
            #
            # During this step, we may detect obsolescence cycles: a node
            # with unknown successors sets but already in the call stack.
            # In such a situation, we arbitrarily set the successors sets of
            # the node to nothing (node pruned) to break the cycle.
            #
            # If no break was encountered we proceed to phase 2.
            #
            # Phase 2 computes successors sets of CURRENT (case 4); see details
            # in phase 2 itself.
            #
            # Note the two levels of iteration in each phase.
            # - The first one handles obsolescence markers using CURRENT as
            #   precursor (successors markers of CURRENT).
            #
            #   Having multiple entries here means divergence.
            #
            # - The second one handles successors defined in each marker.
            #
            #   Having none means pruned node, multiple successors means split,
            #   single successors are standard replacement.
            #
            for mark in sorted(succmarkers[current]):
                for suc in mark[1]:
                    if suc not in cache:
                        if suc in stackedset:
                            # cycle breaking
                            cache[suc] = []
                        else:
                            # case (3) If we have not computed successors sets
                            # of one of those successors we add it to the
                            # `toproceed` stack and stop all work for this
                            # iteration.
                            toproceed.append(suc)
                            stackedset.add(suc)
                            break
                else:
                    continue
                break
            else:
                # case (4): we know all successors sets of all direct
                # successors
                #
                # Successors set contributed by each marker depends on the
                # successors sets of all its "successors" node.
                #
                # Each different marker is a divergence in the obsolescence
                # history. It contributes successors sets distinct from other
                # markers.
                #
                # Within a marker, a successor may have divergent successors
                # sets. In such a case, the marker will contribute multiple
                # divergent successors sets. If multiple successors have
                # divergent successors sets, a Cartesian product is used.
                #
                # At the end we post-process successors sets to remove
                # duplicated entries and successors sets that are strict
                # subsets of another one.
                succssets = []
                for mark in sorted(succmarkers[current]):
                    # successors sets contributed by this marker
                    markss = [[]]
                    for suc in mark[1]:
                        # cardinal product with previous successors
                        productresult = []
                        for prefix in markss:
                            for suffix in cache[suc]:
                                newss = list(prefix)
                                for part in suffix:
                                    # do not duplicate entries in successors
                                    # set. first entry wins.
                                    if part not in newss:
                                        newss.append(part)
                                productresult.append(newss)
                        markss = productresult
                    succssets.extend(markss)
                # remove duplicated and subset successors sets
                seen = []
                final = []
                candidate = sorted(((set(s), s) for s in succssets if s),
                                   key=lambda x: len(x[1]), reverse=True)
                for setversion, listversion in candidate:
                    for seenset in seen:
                        if setversion.issubset(seenset):
                            break
                    else:
                        final.append(listversion)
                        seen.append(setversion)
                final.reverse() # put small successors set first
                cache[current] = final
    return cache[initialnode]
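
# Illustrative sketch (not part of the original module): with two independent
# hypothetical markers (A, (B,)) and (A, (C,)) recorded for A (divergent
# rewriting), and B and C not obsolete themselves:
#
#   >>> sorted(successorssets(repo, A)) == sorted([(B,), (C,)])
#   True
#
# A single split marker (A, (B, C)) would instead yield [(B, C)].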

def _knownrevs(repo, nodes):
    """yield revision numbers of known nodes passed in parameters

    Unknown revisions are silently ignored."""
    torev = repo.changelog.nodemap.get
    for n in nodes:
        rev = torev(n)
        if rev is not None:
            yield rev

# mapping of 'set-name' -> <function to compute this set>
cachefuncs = {}
def cachefor(name):
    """Decorator to register a function as computing the cache for a set"""
    def decorator(func):
        assert name not in cachefuncs
        cachefuncs[name] = func
        return func
    return decorator

def getrevs(repo, name):
    """Return the set of revisions that belong to the <name> set

    Such access may compute the set and cache it for future use"""
    repo = repo.unfiltered()
    if not repo.obsstore:
        return ()
    if name not in repo.obsstore.caches:
        repo.obsstore.caches[name] = cachefuncs[name](repo)
    return repo.obsstore.caches[name]

# To be simple we need to invalidate obsolescence cache when:
#
# - a new changeset is added
# - public phase is changed
# - obsolescence markers are added
# - strip is used on a repo
def clearobscaches(repo):
    """Remove all obsolescence related cache from a repo

    This removes all cached obsstore sets if the obsstore already exists on
    the repo.

    (We could be smarter here given the exact event that triggered the cache
    clearing)"""
    # only clear cache if there is obsstore data in this repo
    if 'obsstore' in repo._filecache:
        repo.obsstore.caches.clear()

@cachefor('obsolete')
def _computeobsoleteset(repo):
    """the set of obsolete revisions"""
    obs = set()
    getrev = repo.changelog.nodemap.get
    getphase = repo._phasecache.phase
    for n in repo.obsstore.successors:
        rev = getrev(n)
        if rev is not None and getphase(repo, rev):
            obs.add(rev)
    return obs

@cachefor('unstable')
def _computeunstableset(repo):
    """the set of non obsolete revisions with obsolete parents"""
    # revset is not efficient enough here
    # we do (obsolete()::) - obsolete() by hand
    obs = getrevs(repo, 'obsolete')
    if not obs:
        return set()
    cl = repo.changelog
    return set(r for r in cl.descendants(obs) if r not in obs)

@cachefor('suspended')
def _computesuspendedset(repo):
    """the set of obsolete parents with non obsolete descendants"""
    suspended = repo.changelog.ancestors(getrevs(repo, 'unstable'))
    return set(r for r in getrevs(repo, 'obsolete') if r in suspended)

@cachefor('extinct')
def _computeextinctset(repo):
    """the set of obsolete parents without non obsolete descendants"""
    return getrevs(repo, 'obsolete') - getrevs(repo, 'suspended')


@cachefor('bumped')
def _computebumpedset(repo):
    """the set of revs trying to obsolete public revisions"""
    bumped = set()
    # util function (avoid attribute lookup in the loop)
    phase = repo._phasecache.phase # would be faster to grab the full list
    public = phases.public
    cl = repo.changelog
    torev = cl.nodemap.get
    obs = getrevs(repo, 'obsolete')
    for rev in repo:
        # We only evaluate mutable, non-obsolete revisions
        if (public < phase(repo, rev)) and (rev not in obs):
            node = cl.node(rev)
            # (future) A cache of precursors may be worth it if split is
            # very common
            for pnode in allprecursors(repo.obsstore, [node],
                                       ignoreflags=bumpedfix):
                prev = torev(pnode) # unfiltered! but so is phasecache
                if (prev is not None) and (phase(repo, prev) <= public):
                    # we have a public precursor
                    bumped.add(rev)
                    break # Next draft!
    return bumped

@cachefor('divergent')
def _computedivergentset(repo):
    """the set of revs that compete to be the final successors of some
    revision.
    """
    divergent = set()
    obsstore = repo.obsstore
    newermap = {}
    for ctx in repo.set('(not public()) - obsolete()'):
        mark = obsstore.precursors.get(ctx.node(), ())
        toprocess = set(mark)
        while toprocess:
            prec = toprocess.pop()[0]
            if prec not in newermap:
                successorssets(repo, prec, newermap)
            newer = [n for n in newermap[prec] if n]
            if len(newer) > 1:
                divergent.add(ctx.rev())
                break
            toprocess.update(obsstore.precursors.get(prec, ()))
    return divergent


def createmarkers(repo, relations, flag=0, date=None, metadata=None):
    """Add obsolete markers between changesets in a repo

    <relations> must be an iterable of (<old>, (<new>, ...)[,{metadata}])
    tuples. `old` and `news` are changectx. metadata is an optional dictionary
    containing metadata for this marker only. It is merged with the global
    metadata specified through the `metadata` argument of this function.

    Trying to obsolete a public changeset will raise an exception.

    Current user and date are used except if specified otherwise in the
    metadata attribute.

    This function operates within a transaction of its own, but does
    not take any lock on the repo.
    """
    # prepare metadata
    if metadata is None:
        metadata = {}
    if 'user' not in metadata:
        metadata['user'] = repo.ui.username()
    tr = repo.transaction('add-obsolescence-marker')
    try:
        for rel in relations:
            prec = rel[0]
            sucs = rel[1]
            localmetadata = metadata.copy()
            if 2 < len(rel):
                localmetadata.update(rel[2])

            if not prec.mutable():
                raise util.Abort("cannot obsolete immutable changeset: %s"
                                 % prec)
            nprec = prec.node()
            nsucs = tuple(s.node() for s in sucs)
            npare = None
            if not nsucs:
                npare = tuple(p.node() for p in prec.parents())
            if nprec in nsucs:
                raise util.Abort("changeset %s cannot obsolete itself" % prec)
            repo.obsstore.create(tr, nprec, nsucs, flag, parents=npare,
                                 date=date, metadata=localmetadata)
            repo.filteredrevcache.clear()
        tr.close()
    finally:
        tr.release()
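
# Illustrative sketch (not part of the original module): recording that a
# changeset was amended, from code holding a repo. `old` and `new` are
# hypothetical changectx objects for the rewritten and rewriting changesets.
#
#   >>> createmarkers(repo, [(old, (new,))], metadata={'note': 'amended'})
#
# Passing an empty successor tuple, [(old, ())], records a prune instead; in
# that case the parents of `old` are stored on the marker for later traversal.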