obsolete: rename all _fm to _fm0...
Pierre-Yves David
r22327:f737631a default
@@ -1,981 +1,981 @@
# obsolete.py - obsolete markers handling
#
# Copyright 2012 Pierre-Yves David <pierre-yves.david@ens-lyon.org>
#                Logilab SA        <contact@logilab.fr>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

9 """Obsolete marker handling
9 """Obsolete marker handling
10
10
11 An obsolete marker maps an old changeset to a list of new
11 An obsolete marker maps an old changeset to a list of new
12 changesets. If the list of new changesets is empty, the old changeset
12 changesets. If the list of new changesets is empty, the old changeset
13 is said to be "killed". Otherwise, the old changeset is being
13 is said to be "killed". Otherwise, the old changeset is being
14 "replaced" by the new changesets.
14 "replaced" by the new changesets.
15
15
16 Obsolete markers can be used to record and distribute changeset graph
16 Obsolete markers can be used to record and distribute changeset graph
17 transformations performed by history rewrite operations, and help
17 transformations performed by history rewrite operations, and help
18 building new tools to reconcile conflicting rewrite actions. To
18 building new tools to reconcile conflicting rewrite actions. To
19 facilitate conflict resolution, markers include various annotations
19 facilitate conflict resolution, markers include various annotations
20 besides old and news changeset identifiers, such as creation date or
20 besides old and news changeset identifiers, such as creation date or
21 author name.
21 author name.
22
22
23 The old obsoleted changeset is called a "precursor" and possible
23 The old obsoleted changeset is called a "precursor" and possible
24 replacements are called "successors". Markers that used changeset X as
24 replacements are called "successors". Markers that used changeset X as
25 a precursor are called "successor markers of X" because they hold
25 a precursor are called "successor markers of X" because they hold
26 information about the successors of X. Markers that use changeset Y as
26 information about the successors of X. Markers that use changeset Y as
27 a successors are call "precursor markers of Y" because they hold
27 a successors are call "precursor markers of Y" because they hold
28 information about the precursors of Y.
28 information about the precursors of Y.
29
29
30 Examples:
30 Examples:
31
31
32 - When changeset A is replaced by changeset A', one marker is stored:
32 - When changeset A is replaced by changeset A', one marker is stored:
33
33
34 (A, (A',))
34 (A, (A',))
35
35
36 - When changesets A and B are folded into a new changeset C, two markers are
36 - When changesets A and B are folded into a new changeset C, two markers are
37 stored:
37 stored:
38
38
39 (A, (C,)) and (B, (C,))
39 (A, (C,)) and (B, (C,))
40
40
41 - When changeset A is simply "pruned" from the graph, a marker is created:
41 - When changeset A is simply "pruned" from the graph, a marker is created:
42
42
43 (A, ())
43 (A, ())
44
44
45 - When changeset A is split into B and C, a single marker are used:
45 - When changeset A is split into B and C, a single marker are used:
46
46
47 (A, (C, C))
47 (A, (C, C))
48
48
49 We use a single marker to distinguish the "split" case from the "divergence"
49 We use a single marker to distinguish the "split" case from the "divergence"
50 case. If two independent operations rewrite the same changeset A in to A' and
50 case. If two independent operations rewrite the same changeset A in to A' and
51 A'', we have an error case: divergent rewriting. We can detect it because
51 A'', we have an error case: divergent rewriting. We can detect it because
52 two markers will be created independently:
52 two markers will be created independently:
53
53
54 (A, (B,)) and (A, (C,))
54 (A, (B,)) and (A, (C,))
55
55
56 Format
56 Format
57 ------
57 ------
58
58
59 Markers are stored in an append-only file stored in
59 Markers are stored in an append-only file stored in
60 '.hg/store/obsstore'.
60 '.hg/store/obsstore'.
61
61
62 The file starts with a version header:
62 The file starts with a version header:
63
63
64 - 1 unsigned byte: version number, starting at zero.
64 - 1 unsigned byte: version number, starting at zero.
65
65
66
66
67 The header is followed by the markers. Each marker is made of:
67 The header is followed by the markers. Each marker is made of:
68
68
69 - 1 unsigned byte: number of new changesets "N", can be zero.
69 - 1 unsigned byte: number of new changesets "N", can be zero.
70
70
71 - 1 unsigned 32-bits integer: metadata size "M" in bytes.
71 - 1 unsigned 32-bits integer: metadata size "M" in bytes.
72
72
73 - 1 byte: a bit field. It is reserved for flags used in common
73 - 1 byte: a bit field. It is reserved for flags used in common
74 obsolete marker operations, to avoid repeated decoding of metadata
74 obsolete marker operations, to avoid repeated decoding of metadata
75 entries.
75 entries.
76
76
77 - 20 bytes: obsoleted changeset identifier.
77 - 20 bytes: obsoleted changeset identifier.
78
78
79 - N*20 bytes: new changesets identifiers.
79 - N*20 bytes: new changesets identifiers.
80
80
81 - M bytes: metadata as a sequence of nul-terminated strings. Each
81 - M bytes: metadata as a sequence of nul-terminated strings. Each
82 string contains a key and a value, separated by a colon ':', without
82 string contains a key and a value, separated by a colon ':', without
83 additional encoding. Keys cannot contain '\0' or ':' and values
83 additional encoding. Keys cannot contain '\0' or ':' and values
84 cannot contain '\0'.
84 cannot contain '\0'.
85
85
86 """
86 """
import struct
import util, base85, node
import phases
from i18n import _

_pack = struct.pack
_unpack = struct.unpack

_SEEK_END = 2 # os.SEEK_END was introduced in Python 2.5

# the obsolete feature is not mature enough to be enabled by default.
# you have to rely on a third party extension to enable this.
_enabled = False

# data used for parsing and writing
-_fmversion = 0
+_fm0version = 0
-_fmfixed = '>BIB20s'
+_fm0fixed = '>BIB20s'
-_fmnode = '20s'
+_fm0node = '20s'
-_fmfsize = struct.calcsize(_fmfixed)
+_fm0fsize = struct.calcsize(_fm0fixed)
-_fmfnodesize = struct.calcsize(_fmnode)
+_fm0fnodesize = struct.calcsize(_fm0node)
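
# Illustrative sketch (editor's example, not from this change): the _fm0
# fixed part packs (number of successors, metadata size, flags, precursor
# node) and is followed by N successor nodes, matching the format described
# in the module docstring. Assuming two successors and no metadata:
#
#   nbsuc, mdsize, flags, prec = 2, 0, 0, '\x11' * 20
#   sucs = ('\x22' * 20, '\x33' * 20)
#   raw = _pack(_fm0fixed + _fm0node * nbsuc,
#               nbsuc, mdsize, flags, prec, *sucs)
#   assert len(raw) == _fm0fsize + nbsuc * _fm0fnodesize  # 26 + 40 bytes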

### obsolescence marker flag

## bumpedfix flag
#
# When a changeset A' succeeds a changeset A which became public, we call A'
# "bumped" because it's a successor of a public changeset
#
#     o A' (bumped)
#     |`:
#     | o A
#     |/
#     o Z
#
# The way to solve this situation is to create a new changeset Ad as a child
# of A. This changeset has the same content as A'. So the diff from A to A'
# is the same as the diff from A to Ad. Ad is marked as a successor of A'
#
#     o Ad
#     |`:
#     | x A'
#     |'|
#     o | A
#     |/
#     o Z
#
# But by transitivity Ad is also a successor of A. To avoid having Ad marked
# as bumped too, we add the `bumpedfix` flag to the marker. <A', (Ad,)>.
# This flag means that the successor expresses the changes between the public
# and bumped version and fixes the situation, breaking the transitivity of
# "bumped" here.
bumpedfix = 1

def _readmarkers(data):
    """Read and enumerate markers from raw data"""
    off = 0
    diskversion = _unpack('>B', data[off:off + 1])[0]
    off += 1
-    if diskversion != _fmversion:
+    if diskversion != _fm0version:
        raise util.Abort(_('parsing obsolete marker: unknown version %r')
                         % diskversion)

    # Loop on markers
    l = len(data)
-    while off + _fmfsize <= l:
+    while off + _fm0fsize <= l:
        # read fixed part
-        cur = data[off:off + _fmfsize]
-        off += _fmfsize
-        nbsuc, mdsize, flags, pre = _unpack(_fmfixed, cur)
+        cur = data[off:off + _fm0fsize]
+        off += _fm0fsize
+        nbsuc, mdsize, flags, pre = _unpack(_fm0fixed, cur)
        # read replacement
        sucs = ()
        if nbsuc:
-            s = _fmfnodesize * nbsuc
+            s = (_fm0fnodesize * nbsuc)
            cur = data[off:off + s]
-            sucs = _unpack(_fmnode * nbsuc, cur)
+            sucs = _unpack(_fm0node * nbsuc, cur)
            off += s
        # read metadata
        # (metadata will be decoded on demand)
        metadata = data[off:off + mdsize]
        if len(metadata) != mdsize:
            raise util.Abort(_('parsing obsolete marker: metadata is too '
                               'short, %d bytes expected, got %d')
                             % (mdsize, len(metadata)))
        off += mdsize
        meta = decodemeta(metadata)
        try:
            when, offset = decodemeta(metadata).pop('date', '0 0').split(' ')
            date = float(when), int(offset)
        except ValueError:
            date = (0., 0)
        parents = None
        if 'p2' in meta:
            parents = (meta.pop('p1', None), meta.pop('p2', None))
        elif 'p1' in meta:
            parents = (meta.pop('p1', None),)
        elif 'p0' in meta:
            parents = ()
        if parents is not None:
            try:
                parents = tuple(node.bin(p) for p in parents)
                # if parent content is not a nodeid, drop the data
                for p in parents:
                    if len(p) != 20:
                        parents = None
                        break
            except TypeError:
                # if content cannot be translated to nodeid drop the data.
                parents = None

        metadata = encodemeta(meta)

        yield (pre, sucs, flags, metadata, date, parents)

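# A minimal usage sketch (editor's illustration): raw data is the version
# byte followed by marker records. With one successor and an empty metadata
# blob, the decoded date falls back to (0., 0) and parents stay None:
#
#   raw = _pack('>B', _fm0version)
#   raw += _pack(_fm0fixed + _fm0node, 1, 0, 0, '\x11' * 20, '\x22' * 20)
#   pre, sucs, flags, meta, date, parents = list(_readmarkers(raw))[0]
#   assert sucs == ('\x22' * 20,) and date == (0., 0) and parents is None
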
def encodemeta(meta):
    """Return an encoded version of a string-to-string metadata mapping.

    Assume no ':' in key and no '\0' in both key and value."""
    for key, value in meta.iteritems():
        if ':' in key or '\0' in key:
            raise ValueError("':' and '\0' are forbidden in metadata key")
        if '\0' in value:
            raise ValueError("'\0' is forbidden in metadata value")
    return '\0'.join(['%s:%s' % (k, meta[k]) for k in sorted(meta)])

def decodemeta(data):
    """Return string to string dictionary from encoded version."""
    d = {}
    for l in data.split('\0'):
        if l:
            key, value = l.split(':')
            d[key] = value
    return d
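
# A quick round-trip sketch (editor's illustration): keys are sorted in the
# encoded blob and pairs are NUL-separated, so:
#
#   blob = encodemeta({'user': 'alice', 'note': 'fold'})
#   assert blob == 'note:fold\x00user:alice'
#   assert decodemeta(blob) == {'user': 'alice', 'note': 'fold'}
#
# Note that a value containing ':' survives encoding but would break
# decodemeta(), which splits each entry on every colon.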

class marker(object):
    """Wrap obsolete marker raw data"""

    def __init__(self, repo, data):
        # the repo argument will be used to create changectx in later versions
        self._repo = repo
        self._data = data
        self._decodedmeta = None

    def __hash__(self):
        return hash(self._data)

    def __eq__(self, other):
        if type(other) != type(self):
            return False
        return self._data == other._data

    def precnode(self):
        """Precursor changeset node identifier"""
        return self._data[0]

    def succnodes(self):
        """List of successor changesets node identifiers"""
        return self._data[1]

    def parentnodes(self):
        """Parents of the precursors (None if not recorded)"""
        return self._data[5]

    def metadata(self):
        """Decoded metadata dictionary"""
        if self._decodedmeta is None:
            self._decodedmeta = decodemeta(self._data[3])
        return self._decodedmeta

    def date(self):
        """Creation date as (unixtime, offset)"""
        return self._data[4]

    def flags(self):
        """The flags field of the marker"""
        return self._data[2]

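# A small usage sketch (editor's illustration; the repo argument is unused
# by the accessors above, so None suffices for a standalone example):
#
#   raw = ('\x11' * 20, ('\x22' * 20,), 0, 'user:alice', (0., 0), None)
#   m = marker(None, raw)
#   assert m.precnode() == raw[0] and m.succnodes() == raw[1]
#   assert m.metadata() == {'user': 'alice'} and m.flags() == 0
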
class obsstore(object):
    """Store obsolete markers

    Markers can be accessed with several mappings:
    - precursors[x] -> set(markers on precursors edges of x)
    - successors[x] -> set(markers on successors edges of x)
    - children[x]   -> set(markers on precursors edges of children(x))
    """

    fields = ('prec', 'succs', 'flag', 'meta', 'date', 'parents')
    # prec:    nodeid, precursor changesets
    # succs:   tuple of nodeid, successor changesets (0-N length)
    # flag:    integer, flag field carrying modifier for the markers (see doc)
    # meta:    binary blob, encoded metadata dictionary
    # date:    (float, int) tuple, date of marker creation
    # parents: (tuple of nodeid) or None, parents of precursors
    #          None is used when no data has been recorded

    def __init__(self, sopener):
        # caches for various obsolescence related cache
        self.caches = {}
        self._all = []
        self.precursors = {}
        self.successors = {}
        self.children = {}
        self.sopener = sopener
        data = sopener.tryread('obsstore')
        if data:
            self._load(_readmarkers(data))

    def __iter__(self):
        return iter(self._all)

    def __len__(self):
        return len(self._all)

    def __nonzero__(self):
        return bool(self._all)

    def create(self, transaction, prec, succs=(), flag=0, parents=None,
               date=None, metadata=None):
        """obsolete: add a new obsolete marker

        * ensuring it is hashable
        * check mandatory metadata
        * encode metadata

        If you are a human writing code creating markers you want to use the
        `createmarkers` function in this module instead.

        Return True if a new marker has been added, False if the marker
        already existed (no-op).
        """
        if metadata is None:
            metadata = {}
        if date is None:
            if 'date' in metadata:
                # as a courtesy for out-of-tree extensions
                date = util.parsedate(metadata.pop('date'))
            else:
                date = util.makedate()
        if len(prec) != 20:
            raise ValueError(prec)
        for succ in succs:
            if len(succ) != 20:
                raise ValueError(succ)
        if prec in succs:
            raise ValueError(_('in-marker cycle with %s') % node.hex(prec))
        marker = (str(prec), tuple(succs), int(flag), encodemeta(metadata),
                  date, parents)
        return bool(self.add(transaction, [marker]))

    def add(self, transaction, markers):
        """Add new markers to the store

        Take care of filtering duplicates.
        Return the number of new markers."""
        if not _enabled:
            raise util.Abort('obsolete feature is not enabled on this repo')
        known = set(self._all)
        new = []
        for m in markers:
            if m not in known:
                known.add(m)
                new.append(m)
        if new:
            f = self.sopener('obsstore', 'ab')
            try:
                # Whether the file's current position is at the beginning or
                # at the end after opening a file for appending is
                # implementation defined. So we must seek to the end before
                # calling tell(), or we may get a zero offset for non-zero
                # sized files on some platforms (issue3543).
                f.seek(0, _SEEK_END)
                offset = f.tell()
                transaction.add('obsstore', offset)
                # offset == 0: new file - add the version header
                for bytes in _encodemarkers(new, offset == 0):
                    f.write(bytes)
            finally:
                # XXX: f.close() == filecache invalidation == obsstore rebuilt.
                # call 'filecacheentry.refresh()' here
                f.close()
            self._load(new)
            # new markers *may* have changed several sets. invalidate the cache.
            self.caches.clear()
        return len(new)

    def mergemarkers(self, transaction, data):
        """merge a binary stream of markers inside the obsstore

        Returns the number of new markers added."""
        markers = _readmarkers(data)
        return self.add(transaction, markers)

    def _load(self, markers):
        for mark in markers:
            self._all.append(mark)
            pre, sucs = mark[:2]
            self.successors.setdefault(pre, set()).add(mark)
            for suc in sucs:
                self.precursors.setdefault(suc, set()).add(mark)
            parents = mark[5]
            if parents is not None:
                for p in parents:
                    self.children.setdefault(p, set()).add(mark)
        if node.nullid in self.precursors:
            raise util.Abort(_('bad obsolescence marker detected: '
                               'invalid successors nullid'))

    def relevantmarkers(self, nodes):
        """return a set of all obsolescence markers relevant to a set of nodes.

        "relevant" to a set of nodes means:

        - markers that use a changeset in this set as successor
        - prune markers of direct children of these changesets
        - recursive application of the two rules on precursors of these
          markers

        It is a set so you cannot rely on order."""

        pendingnodes = set(nodes)
        seenmarkers = set()
        seennodes = set(pendingnodes)
        precursorsmarkers = self.precursors
        children = self.children
        while pendingnodes:
            direct = set()
            for current in pendingnodes:
                direct.update(precursorsmarkers.get(current, ()))
                pruned = [m for m in children.get(current, ()) if not m[1]]
                direct.update(pruned)
            direct -= seenmarkers
            pendingnodes = set([m[0] for m in direct])
            seenmarkers |= direct
            pendingnodes -= seennodes
            seennodes |= pendingnodes
        return seenmarkers

def _encodemarkers(markers, addheader=False):
    # Kept separate from flushmarkers(), it will be reused for
    # markers exchange.
    if addheader:
-        yield _pack('>B', _fmversion)
+        yield _pack('>B', _fm0version)
    for marker in markers:
        yield _encodeonemarker(marker)


def _encodeonemarker(marker):
    pre, sucs, flags, metadata, date, parents = marker
    metadata = decodemeta(metadata)
    metadata['date'] = '%d %i' % date
    if parents is not None:
        if not parents:
            # mark that we explicitly recorded no parents
            metadata['p0'] = ''
        for i, p in enumerate(parents):
            metadata['p%i' % (i + 1)] = node.hex(p)
    metadata = encodemeta(metadata)
    nbsuc = len(sucs)
-    format = _fmfixed + (_fmnode * nbsuc)
+    format = _fm0fixed + (_fm0node * nbsuc)
    data = [nbsuc, len(metadata), flags, pre]
    data.extend(sucs)
    return _pack(format, *data) + metadata

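# Round-trip sketch (editor's illustration): _encodeonemarker() folds the
# date into the metadata blob, so reading an encoded marker back yields
# 'date:0 0' in the metadata field even when the input metadata was empty:
#
#   m = ('\x11' * 20, ('\x22' * 20,), 0, '', (0., 0), None)
#   data = ''.join(_encodemarkers([m], addheader=True))
#   assert list(_readmarkers(data)) == \
#       [('\x11' * 20, ('\x22' * 20,), 0, 'date:0 0', (0., 0), None)]
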
# arbitrarily picked to fit into 8K limit from HTTP server
# you have to take into account:
# - the version header
# - the base85 encoding
_maxpayload = 5300

def _pushkeyescape(markers):
    """encode markers into a dict suitable for pushkey exchange

    - binary data is base85 encoded
    - split in chunks smaller than 5300 bytes"""
    keys = {}
    parts = []
    currentlen = _maxpayload * 2 # ensure we create a new part
    for marker in markers:
        nextdata = _encodeonemarker(marker)
        if (len(nextdata) + currentlen > _maxpayload):
            currentpart = []
            currentlen = 0
            parts.append(currentpart)
        currentpart.append(nextdata)
        currentlen += len(nextdata)
    for idx, part in enumerate(reversed(parts)):
-        data = ''.join([_pack('>B', _fmversion)] + part)
+        data = ''.join([_pack('>B', _fm0version)] + part)
        keys['dump%i' % idx] = base85.b85encode(data)
    return keys
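
# Chunking sketch (editor's illustration, assuming the base85 round-trip):
# every 'dump%i' value is a base85-encoded chunk that carries its own version
# header, so each chunk can be decoded independently with _readmarkers():
#
#   markers = [('\x11' * 20, (), 0, '', (0., 0), None)] * 3
#   keys = _pushkeyescape(markers)
#   data = base85.b85decode(keys['dump0'])
#   assert len(list(_readmarkers(data))) == 3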

def listmarkers(repo):
    """List markers over pushkey"""
    if not repo.obsstore:
        return {}
    return _pushkeyescape(repo.obsstore)

def pushmarker(repo, key, old, new):
    """Push markers over pushkey"""
    if not key.startswith('dump'):
        repo.ui.warn(_('unknown key: %r') % key)
        return 0
    if old:
        repo.ui.warn(_('unexpected old value for %r') % key)
        return 0
    data = base85.b85decode(new)
    lock = repo.lock()
    try:
        tr = repo.transaction('pushkey: obsolete markers')
        try:
            repo.obsstore.mergemarkers(tr, data)
            tr.close()
            return 1
        finally:
            tr.release()
    finally:
        lock.release()

def getmarkers(repo, nodes=None):
    """returns markers known in a repository

    If <nodes> is specified, only markers "relevant" to those nodes are
    returned"""
    if nodes is None:
        rawmarkers = repo.obsstore
    else:
        rawmarkers = repo.obsstore.relevantmarkers(nodes)

    for markerdata in rawmarkers:
        yield marker(repo, markerdata)

def relevantmarkers(repo, node):
    """all obsolete markers relevant to some revision"""
    for markerdata in repo.obsstore.relevantmarkers(node):
        yield marker(repo, markerdata)


def precursormarkers(ctx):
    """obsolete marker marking this changeset as a successor"""
    for data in ctx._repo.obsstore.precursors.get(ctx.node(), ()):
        yield marker(ctx._repo, data)

def successormarkers(ctx):
    """obsolete marker making this changeset obsolete"""
    for data in ctx._repo.obsstore.successors.get(ctx.node(), ()):
        yield marker(ctx._repo, data)

def allsuccessors(obsstore, nodes, ignoreflags=0):
    """Yield node for every successor of <nodes>.

    Some successors may be unknown locally.

    This is a linear yield unsuited to detecting split changesets. It includes
    initial nodes too."""
    remaining = set(nodes)
    seen = set(remaining)
    while remaining:
        current = remaining.pop()
        yield current
        for mark in obsstore.successors.get(current, ()):
            # ignore marker flagged with specified flag
            if mark[2] & ignoreflags:
                continue
            for suc in mark[1]:
                if suc not in seen:
                    seen.add(suc)
                    remaining.add(suc)

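# Traversal sketch (editor's illustration; single letters stand in for the
# 20-byte node ids): allsuccessors() only needs an object with a 'successors'
# mapping, so a hand-built stand-in is enough. Here A is rewritten into B,
# and B into C:
#
#   class fakestore(object):
#       successors = {'A': [('A', ('B',), 0, '', (0., 0), None)],
#                     'B': [('B', ('C',), 0, '', (0., 0), None)]}
#   assert set(allsuccessors(fakestore(), ['A'])) == set(['A', 'B', 'C'])
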
def allprecursors(obsstore, nodes, ignoreflags=0):
    """Yield node for every precursor of <nodes>.

    Some precursors may be unknown locally.

    This is a linear yield unsuited to detecting folded changesets. It includes
    initial nodes too."""

    remaining = set(nodes)
    seen = set(remaining)
    while remaining:
        current = remaining.pop()
        yield current
        for mark in obsstore.precursors.get(current, ()):
            # ignore marker flagged with specified flag
            if mark[2] & ignoreflags:
                continue
            suc = mark[0]
            if suc not in seen:
                seen.add(suc)
                remaining.add(suc)

def foreground(repo, nodes):
    """return all nodes in the "foreground" of the given nodes

    The foreground of a revision is anything reachable using parent -> children
    or precursor -> successor relation. It is very similar to "descendant" but
    augmented with obsolescence information.

    Beware that obsolescence cycles may result in complex situations.
    """
    repo = repo.unfiltered()
    foreground = set(repo.set('%ln::', nodes))
    if repo.obsstore:
        # We only need this complicated logic if there is obsolescence
        # XXX will probably deserve an optimised revset.
        nm = repo.changelog.nodemap
        plen = -1
        # compute the whole set of successors or descendants
        while len(foreground) != plen:
            plen = len(foreground)
            succs = set(c.node() for c in foreground)
            mutable = [c.node() for c in foreground if c.mutable()]
            succs.update(allsuccessors(repo.obsstore, mutable))
            known = (n for n in succs if n in nm)
            foreground = set(repo.set('%ln::', known))
    return set(c.node() for c in foreground)


def successorssets(repo, initialnode, cache=None):
    """Return all sets of successors of the initial node

    The successors set of a changeset A is a group of revisions that succeed
    A. It succeeds A as a consistent whole, each revision being only a partial
    replacement. The successors set contains non-obsolete changesets only.

    This function returns the full list of successor sets which is why it
    returns a list of tuples and not just a single tuple. Each tuple is a valid
    successors set. Note that (A,) may be a valid successors set for changeset A
    (see below).

    In most cases, a changeset A will have a single element (e.g. the changeset
    A is replaced by A') in its successors set. Though, it is also common for a
    changeset A to have no elements in its successor set (e.g. the changeset
    has been pruned). Therefore, the returned list of successors sets will be
    [(A',)] or [], respectively.

    When a changeset A is split into A' and B', however, it will result in a
    successors set containing more than a single element, i.e. [(A',B')].
    Divergent changesets will result in multiple successors sets, i.e. [(A',),
    (A'')].

    If a changeset A is not obsolete, then it will conceptually have no
    successors set. To distinguish this from a pruned changeset, the successor
    set will only contain itself, i.e. [(A,)].

    Finally, successors unknown locally are considered to be pruned (obsoleted
    without any successors).

    The optional `cache` parameter is a dictionary that may contain precomputed
    successors sets. It is meant to reuse the computation of a previous call to
    `successorssets` when multiple calls are made at the same time. The cache
    dictionary is updated in place. The caller is responsible for its life
    span. Code that makes multiple calls to `successorssets` *must* use this
    cache mechanism or suffer terrible performance.

    """

    succmarkers = repo.obsstore.successors

    # Stack of nodes we search successors sets for
    toproceed = [initialnode]
    # set version of above list for fast loop detection
    # element added to "toproceed" must be added here
    stackedset = set(toproceed)
    if cache is None:
        cache = {}

    # This while loop is the flattened version of a recursive search for
    # successors sets
    #
    # def successorssets(x):
    #     successors = directsuccessors(x)
    #     ss = [[]]
    #     for succ in directsuccessors(x):
    #         # product as in itertools cartesian product
    #         ss = product(ss, successorssets(succ))
    #     return ss
    #
    # But we can not use plain recursive calls here:
    # - that would blow the python call stack
    # - obsolescence markers may have cycles, we need to handle them.
    #
    # The `toproceed` list acts as our call stack. Every node we search
    # successors sets for is stacked there.
    #
    # The `stackedset` is a set version of this stack used to check if a node
    # is already stacked. This check is used to detect cycles and prevent
    # infinite loops.
    #
    # successors sets of all nodes are stored in the `cache` dictionary.
    #
    # After this while loop ends we use the cache to return the successors
    # sets for the node requested by the caller.
    while toproceed:
        # Every iteration tries to compute the successors sets of the topmost
        # node of the stack: CURRENT.
        #
        # There are four possible outcomes:
        #
        # 1) We already know the successors sets of CURRENT:
        #    -> mission accomplished, pop it from the stack.
        # 2) Node is not obsolete:
        #    -> the node is its own successors sets. Add it to the cache.
        # 3) We do not know successors set of direct successors of CURRENT:
        #    -> We add those successors to the stack.
        # 4) We know successors sets of all direct successors of CURRENT:
        #    -> We can compute CURRENT successors set and add it to the
        #       cache.
        #
        current = toproceed[-1]
        if current in cache:
            # case (1): We already know the successors sets
            stackedset.remove(toproceed.pop())
        elif current not in succmarkers:
            # case (2): The node is not obsolete.
            if current in repo:
                # We have a valid last successors.
                cache[current] = [(current,)]
            else:
                # Final obsolete version is unknown locally.
                # Do not count that as a valid successors
                cache[current] = []
        else:
            # cases (3) and (4)
            #
            # We proceed in two phases. Phase 1 aims to distinguish case (3)
            # from case (4):
            #
            #     For each direct successors of CURRENT, we check whether its
            #     successors sets are known. If they are not, we stack the
            #     unknown node and proceed to the next iteration of the while
            #     loop. (case 3)
            #
            #     During this step, we may detect obsolescence cycles: a node
            #     with unknown successors sets but already in the call stack.
            #     In such a situation, we arbitrarily set the successors sets
            #     of the node to nothing (node pruned) to break the cycle.
            #
            #     If no break was encountered we proceed to phase 2.
            #
            # Phase 2 computes successors sets of CURRENT (case 4); see details
            # in phase 2 itself.
            #
            # Note the two levels of iteration in each phase.
            # - The first one handles obsolescence markers using CURRENT as
            #   precursor (successors markers of CURRENT).
            #
            #   Having multiple entries here means divergence.
            #
            # - The second one handles successors defined in each marker.
            #
            #   Having none means pruned node, multiple successors means split,
            #   single successors are standard replacement.
            #
            for mark in sorted(succmarkers[current]):
                for suc in mark[1]:
                    if suc not in cache:
                        if suc in stackedset:
                            # cycle breaking
                            cache[suc] = []
                        else:
                            # case (3) If we have not computed successors sets
                            # of one of those successors we add it to the
                            # `toproceed` stack and stop all work for this
                            # iteration.
                            toproceed.append(suc)
                            stackedset.add(suc)
                            break
                else:
                    continue
                break
            else:
                # case (4): we know all successors sets of all direct
                # successors
                #
                # Successors set contributed by each marker depends on the
                # successors sets of all its "successors" node.
                #
                # Each different marker is a divergence in the obsolescence
                # history. It contributes successors sets distinct from other
                # markers.
                #
                # Within a marker, a successor may have divergent successors
                # sets. In such a case, the marker will contribute multiple
                # divergent successors sets. If multiple successors have
                # divergent successors sets, a Cartesian product is used.
                #
                # At the end we post-process successors sets to remove
                # duplicated entries and successors sets that are strict
                # subsets of another one.
                succssets = []
                for mark in sorted(succmarkers[current]):
                    # successors sets contributed by this marker
                    markss = [[]]
                    for suc in mark[1]:
                        # cardinal product with previous successors
                        productresult = []
                        for prefix in markss:
                            for suffix in cache[suc]:
                                newss = list(prefix)
                                for part in suffix:
                                    # do not duplicate entries in successors
                                    # set; first entry wins.
                                    if part not in newss:
                                        newss.append(part)
                                productresult.append(newss)
                        markss = productresult
                    succssets.extend(markss)
                # remove duplicated and subset
                seen = []
                final = []
                candidate = sorted(((set(s), s) for s in succssets if s),
                                   key=lambda x: len(x[1]), reverse=True)
                for setversion, listversion in candidate:
                    for seenset in seen:
                        if setversion.issubset(seenset):
                            break
                    else:
                        final.append(listversion)
                        seen.append(setversion)
                final.reverse() # put small successors set first
                cache[current] = final
    return cache[initialnode]
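
# A worked example of the final filtering step above (editor's illustration):
# exact duplicates and strict subsets of another successors set are dropped,
# and smaller sets end up first:
#
#   succssets = [['B'], ['B'], ['B', 'C'], ['D']]
#   seen, final = [], []
#   candidate = sorted(((set(s), s) for s in succssets if s),
#                      key=lambda x: len(x[1]), reverse=True)
#   for setversion, listversion in candidate:
#       for seenset in seen:
#           if setversion.issubset(seenset):
#               break
#       else:
#           final.append(listversion)
#           seen.append(setversion)
#   final.reverse()
#   assert final == [['D'], ['B', 'C']]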

def _knownrevs(repo, nodes):
    """yield revision numbers of known nodes passed in parameters

    Unknown revisions are silently ignored."""
    torev = repo.changelog.nodemap.get
    for n in nodes:
        rev = torev(n)
        if rev is not None:
            yield rev

# mapping of 'set-name' -> <function to compute this set>
cachefuncs = {}
def cachefor(name):
    """Decorator to register a function as computing the cache for a set"""
    def decorator(func):
        assert name not in cachefuncs
        cachefuncs[name] = func
        return func
    return decorator

def getrevs(repo, name):
    """Return the set of revisions that belong to the <name> set

    Such access may compute the set and cache it for future use"""
    repo = repo.unfiltered()
    if not repo.obsstore:
        return ()
    if name not in repo.obsstore.caches:
        repo.obsstore.caches[name] = cachefuncs[name](repo)
    return repo.obsstore.caches[name]

# To be simple we need to invalidate obsolescence cache when:
#
# - a new changeset is added
# - public phase is changed
# - obsolescence markers are added
# - strip is used on a repo
def clearobscaches(repo):
    """Remove all obsolescence related caches from a repo

    This removes all caches in the obsstore if the obsstore already exists on
    the repo.

    (We could be smarter here given the exact event that triggers the cache
    clearing)"""
    # only clear caches if there is obsstore data in this repo
    if 'obsstore' in repo._filecache:
        repo.obsstore.caches.clear()
854
854
@cachefor('obsolete')
def _computeobsoleteset(repo):
    """the set of obsolete revisions"""
    obs = set()
    getrev = repo.changelog.nodemap.get
    getphase = repo._phasecache.phase
    for n in repo.obsstore.successors:
        rev = getrev(n)
        # a non-zero phase means mutable: public changesets never become
        # obsolete, whatever the markers say
        if rev is not None and getphase(repo, rev):
            obs.add(rev)
    return obs

@cachefor('unstable')
def _computeunstableset(repo):
    """the set of non obsolete revisions with obsolete ancestors"""
    # revset is not efficient enough here
    # we do (obsolete()::) - obsolete() by hand
    obs = getrevs(repo, 'obsolete')
    if not obs:
        return set()
    cl = repo.changelog
    return set(r for r in cl.descendants(obs) if r not in obs)

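# Editor's note (illustration, not part of obsolete.py): the hand-rolled loop
# above computes the same set as the revset named in its comment. A
# hypothetical sketch of the (slower) revset spelling, for comparison:
def _demounstablerevset(repo):
    """hypothetical: same result as _computeunstableset, via the revset
    engine"""
    return set(repo.revs('(obsolete()::) - obsolete()'))
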
@cachefor('suspended')
def _computesuspendedset(repo):
    """the set of obsolete revisions with non obsolete descendants"""
    suspended = repo.changelog.ancestors(getrevs(repo, 'unstable'))
    return set(r for r in getrevs(repo, 'obsolete') if r in suspended)

@cachefor('extinct')
def _computeextinctset(repo):
    """the set of obsolete revisions without non obsolete descendants"""
    return getrevs(repo, 'obsolete') - getrevs(repo, 'suspended')

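# Editor's note (illustration, not part of obsolete.py): by construction
# 'suspended' and 'extinct' partition 'obsolete': every obsolete revision
# either still anchors an unstable descendant or is safe to hide. A toy
# check of that set algebra with hypothetical revision numbers:
def _demopartition():
    obsolete = set([2, 3, 5])
    suspended = set([3]) # say rev 3 has an unstable descendant
    extinct = obsolete - suspended
    assert extinct == set([2, 5]) and not (extinct & suspended)
    return extinct
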
@cachefor('bumped')
def _computebumpedset(repo):
    """the set of revs trying to obsolete public revisions"""
    bumped = set()
    # util functions (avoid attribute lookups in the loop)
    phase = repo._phasecache.phase # would be faster to grab the full list
    public = phases.public
    cl = repo.changelog
    torev = cl.nodemap.get
    obs = getrevs(repo, 'obsolete')
    for rev in repo:
        # We only evaluate mutable, non-obsolete revisions
        if (public < phase(repo, rev)) and (rev not in obs):
            node = cl.node(rev)
            # (future) a cache of precursors may be worth it if splits are
            # very common
            for pnode in allprecursors(repo.obsstore, [node],
                                       ignoreflags=bumpedfix):
                prev = torev(pnode) # unfiltered! but so is phasecache
                if (prev is not None) and (phase(repo, prev) <= public):
                    # we have a public precursor
                    bumped.add(rev)
                    break # Next draft!
    return bumped

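# Editor's note (illustration, not part of obsolete.py): the heart of the
# loop above is a transitive walk of precursor markers that stops at the
# first public precursor. A toy model with plain dicts, standing in for
# allprecursors() and the phase check (all names hypothetical):
def _demohaspublicprecursor(node, precursors, publicnodes):
    """hypothetical: True if any transitive precursor of node is public"""
    toprocess = list(precursors.get(node, ()))
    seen = set(toprocess)
    while toprocess:
        p = toprocess.pop()
        if p in publicnodes:
            return True # this rev rewrites public history: 'bumped'
        for q in precursors.get(p, ()):
            if q not in seen:
                seen.add(q)
                toprocess.append(q)
    return False
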
@cachefor('divergent')
def _computedivergentset(repo):
    """the set of revs that compete to be the final successors of some
    revision"""
    divergent = set()
    obsstore = repo.obsstore
    newermap = {}
    for ctx in repo.set('(not public()) - obsolete()'):
        mark = obsstore.precursors.get(ctx.node(), ())
        toprocess = set(mark)
        while toprocess:
            prec = toprocess.pop()[0]
            if prec not in newermap:
                successorssets(repo, prec, newermap)
            newer = [n for n in newermap[prec] if n]
            if len(newer) > 1:
                divergent.add(ctx.rev())
                break
            toprocess.update(obsstore.precursors.get(prec, ()))
    return divergent

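# Editor's note (illustration, not part of obsolete.py): divergence appears
# when one precursor ends up with several independent, non-empty successors
# sets, e.g. after the same draft was rewritten in two repos. A toy sketch of
# the test performed above, with a hand-built successors map:
def _demoisdivergent():
    newermap = {'A': [['B'], ['C']]} # A rewritten as B and, elsewhere, as C
    newer = [n for n in newermap['A'] if n]
    return len(newer) > 1 # True: B and C compete, so both are divergent
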
def createmarkers(repo, relations, flag=0, date=None, metadata=None):
    """Add obsolete markers between changesets in a repo

    <relations> must be an iterable of (<old>, (<new>, ...)[,{metadata}])
    tuples. `old` and each `new` are changectx objects. metadata is an
    optional dictionary containing metadata for this marker only. It is
    merged with the global metadata specified through the `metadata`
    argument of this function.

    Trying to obsolete a public changeset will raise an exception.

    The current user and date are used unless specified otherwise in the
    metadata argument.

    This function operates within a transaction of its own, but does
    not take any lock on the repo.
    """
    # prepare metadata
    if metadata is None:
        metadata = {}
    if 'user' not in metadata:
        metadata['user'] = repo.ui.username()
    tr = repo.transaction('add-obsolescence-marker')
    try:
        for rel in relations:
            prec = rel[0]
            sucs = rel[1]
            localmetadata = metadata.copy()
            if 2 < len(rel):
                localmetadata.update(rel[2])

            if not prec.mutable():
                raise util.Abort("cannot obsolete immutable changeset: %s"
                                 % prec)
            nprec = prec.node()
            nsucs = tuple(s.node() for s in sucs)
            npare = None
            if not nsucs:
                # a prune: record the parents of the pruned changeset
                npare = tuple(p.node() for p in prec.parents())
            if nprec in nsucs:
                raise util.Abort("changeset %s cannot obsolete itself" % prec)
            repo.obsstore.create(tr, nprec, nsucs, flag, parents=npare,
                                 date=date, metadata=localmetadata)
            repo.filteredrevcache.clear()
        tr.close()
    finally:
        tr.release()

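# Editor's note (illustration, not part of obsolete.py): a hypothetical
# sketch of calling createmarkers() from an extension after a rewrite. 'old'
# and 'new' are changectx objects; taking the lock is the caller's job since
# createmarkers() only opens a transaction:
def _demorewrite(repo, old, new):
    lock = repo.lock()
    try:
        createmarkers(repo, [(old, (new,), {'note': 'demo rewrite'})])
    finally:
        lock.release()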