obsstore: also store the 'parents' field on disk...
Pierre-Yves David
r22258:cf414981 default
@@ -1,901 +1,926
# obsolete.py - obsolete markers handling
#
# Copyright 2012 Pierre-Yves David <pierre-yves.david@ens-lyon.org>
#                Logilab SA        <contact@logilab.fr>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""Obsolete marker handling

An obsolete marker maps an old changeset to a list of new
changesets. If the list of new changesets is empty, the old changeset
is said to be "killed". Otherwise, the old changeset is being
"replaced" by the new changesets.

Obsolete markers can be used to record and distribute changeset graph
transformations performed by history rewrite operations, and help
building new tools to reconcile conflicting rewrite actions. To
facilitate conflict resolution, markers include various annotations
besides old and new changeset identifiers, such as creation date or
author name.

The old obsoleted changeset is called a "precursor" and possible
replacements are called "successors". Markers that used changeset X as
a precursor are called "successor markers of X" because they hold
information about the successors of X. Markers that use changeset Y as
a successor are called "precursor markers of Y" because they hold
information about the precursors of Y.

Examples:

- When changeset A is replaced by changeset A', one marker is stored:

    (A, (A',))

- When changesets A and B are folded into a new changeset C, two markers are
  stored:

    (A, (C,)) and (B, (C,))

- When changeset A is simply "pruned" from the graph, a marker is created:

    (A, ())

- When changeset A is split into B and C, a single marker is used:

    (A, (B, C))

We use a single marker to distinguish the "split" case from the "divergence"
case. If two independent operations rewrite the same changeset A into A' and
A'', we have an error case: divergent rewriting. We can detect it because
two markers will be created independently:

    (A, (B,)) and (A, (C,))

Format
------

Markers are stored in an append-only file stored in
'.hg/store/obsstore'.

The file starts with a version header:

- 1 unsigned byte: version number, starting at zero.


The header is followed by the markers. Each marker is made of:

- 1 unsigned byte: number of new changesets "N", can be zero.

- 1 unsigned 32-bit integer: metadata size "M" in bytes.

- 1 byte: a bit field. It is reserved for flags used in common
  obsolete marker operations, to avoid repeated decoding of metadata
  entries.

- 20 bytes: obsoleted changeset identifier.

- N*20 bytes: new changeset identifiers.

- M bytes: metadata as a sequence of nul-terminated strings. Each
  string contains a key and a value, separated by a colon ':', without
  additional encoding. Keys cannot contain '\0' or ':' and values
  cannot contain '\0'.

"""
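The binary layout described above can be sketched with a few lines of `struct` packing. This is an illustrative, standalone reconstruction of the version-0 marker format, not the module's own code; the helper name `pack_marker` is made up for the example.

```python
import struct

# Field layout from the docstring (format version 0):
# N (1 byte), metadata size M (4 bytes), flags (1 byte), precursor (20 bytes),
# then N successor nodes of 20 bytes each, then M bytes of metadata.
_fmfixed = '>BIB20s'
_fmnode = '20s'

def pack_marker(precursor, successors, flags, metadata):
    """Pack one marker into the version-0 on-disk layout (sketch)."""
    fmt = _fmfixed + _fmnode * len(successors)
    fixed = struct.pack(fmt, len(successors), len(metadata), flags,
                        precursor, *successors)
    return fixed + metadata

# 20-byte dummy node ids standing in for SHA-1 hashes
prec = b'\x11' * 20
succ = b'\x22' * 20
raw = pack_marker(prec, [succ], 0, b'user:alice')
# fixed part (26 bytes) + one successor (20) + metadata (10) = 56 bytes
assert len(raw) == 56
```

Note that the 20-byte precursor node sits inside the fixed-size part, so a marker with zero successors still occupies 26 bytes plus its metadata.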
import struct
import util, base85, node
import phases
from i18n import _

_pack = struct.pack
_unpack = struct.unpack

_SEEK_END = 2 # os.SEEK_END was introduced in Python 2.5

# the obsolete feature is not mature enough to be enabled by default.
# you have to rely on a third party extension to enable this.
_enabled = False

# data used for parsing and writing
_fmversion = 0
_fmfixed = '>BIB20s'
_fmnode = '20s'
_fmfsize = struct.calcsize(_fmfixed)
_fnodesize = struct.calcsize(_fmnode)

### obsolescence marker flag

## bumpedfix flag
#
# When a changeset A' succeeds a changeset A which became public, we call A'
# "bumped" because it's a successor of a public changeset
#
#  o  A' (bumped)
#  |`:
#  | o  A
#  |/
#  o  Z
#
# The way to solve this situation is to create a new changeset Ad as a child
# of A. This changeset has the same content as A'. So the diff from A to A'
# is the same as the diff from A to Ad. Ad is marked as a successor of A'
#
#  o  Ad
#  |`:
#  | x  A'
#  |'|
#  o |  A
#  |/
#  o  Z
#
# But by transitivity Ad is also a successor of A. To avoid having Ad marked
# as bumped too, we add the `bumpedfix` flag to the marker, <A', (Ad,)>.
# This flag means that the successor expresses the changes between the public
# and bumped version and fixes the situation, breaking the transitivity of
# "bumped" here.
bumpedfix = 1

def _readmarkers(data):
    """Read and enumerate markers from raw data"""
    off = 0
    diskversion = _unpack('>B', data[off:off + 1])[0]
    off += 1
    if diskversion != _fmversion:
        raise util.Abort(_('parsing obsolete marker: unknown version %r')
                         % diskversion)

    # Loop on markers
    l = len(data)
    while off + _fmfsize <= l:
        # read fixed part
        cur = data[off:off + _fmfsize]
        off += _fmfsize
        nbsuc, mdsize, flags, pre = _unpack(_fmfixed, cur)
        # read replacement
        sucs = ()
        if nbsuc:
            s = (_fnodesize * nbsuc)
            cur = data[off:off + s]
            sucs = _unpack(_fmnode * nbsuc, cur)
            off += s
        # read metadata
        # (metadata will be decoded on demand)
        metadata = data[off:off + mdsize]
        if len(metadata) != mdsize:
            raise util.Abort(_('parsing obsolete marker: metadata is too '
                               'short, %d bytes expected, got %d')
                             % (mdsize, len(metadata)))
        off += mdsize
        meta = decodemeta(metadata)
        try:
            date = util.parsedate(decodemeta(metadata).pop('date', '0 0'))
        except util.Abort:
            date = (0., 0)
        parents = None
        if 'p2' in meta:
            parents = (meta.pop('p1', None), meta.pop('p2', None))
        elif 'p1' in meta:
            parents = (meta.pop('p1', None),)
        elif 'p0' in meta:
            parents = ()
        if parents is not None:
            try:
                parents = tuple(node.bin(p) for p in parents)
                # if parent content is not a nodeid, drop the data
                for p in parents:
                    if len(p) != 20:
                        parents = None
                        break
            except TypeError:
                # if content cannot be translated to nodeid drop the data.
                parents = None

        metadata = encodemeta(meta)

        yield (pre, sucs, flags, metadata, date, parents)

def encodemeta(meta):
    """Return encoded metadata string to string mapping.

    Assume no ':' in key and no '\0' in both key and value."""
    for key, value in meta.iteritems():
        if ':' in key or '\0' in key:
            raise ValueError("':' and '\0' are forbidden in metadata key")
        if '\0' in value:
            raise ValueError("'\0' is forbidden in metadata value")
    return '\0'.join(['%s:%s' % (k, meta[k]) for k in sorted(meta)])

def decodemeta(data):
    """Return string to string dictionary from encoded version."""
    d = {}
    for l in data.split('\0'):
        if l:
            key, value = l.split(':', 1)
            d[key] = value
    return d

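The metadata wire format is simple enough to demonstrate with a round trip: nul-separated `key:value` strings with keys sorted. This is a standalone sketch written for modern Python 3 strings, mirroring `encodemeta`/`decodemeta` above rather than reusing them.

```python
# Metadata blob format: 'key:value' entries joined by '\0', keys sorted.
# Keys may not contain ':' or '\0'; values may not contain '\0'.
def encode(meta):
    return '\0'.join('%s:%s' % (k, meta[k]) for k in sorted(meta))

def decode(data):
    d = {}
    for item in data.split('\0'):
        if item:
            # split on the first ':' only, since values may contain ':'
            key, value = item.split(':', 1)
            d[key] = value
    return d

blob = encode({'user': 'alice', 'date': '0 0'})
assert blob == 'date:0 0\0user:alice'
assert decode(blob) == {'user': 'alice', 'date': '0 0'}
```

Splitting on the first colon only is what lets values such as URLs or timestamps carry ':' without breaking the decoder.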
class marker(object):
    """Wrap obsolete marker raw data"""

    def __init__(self, repo, data):
        # the repo argument will be used to create changectx in later version
        self._repo = repo
        self._data = data
        self._decodedmeta = None

    def __hash__(self):
        return hash(self._data)

    def __eq__(self, other):
        if type(other) != type(self):
            return False
        return self._data == other._data

    def precnode(self):
        """Precursor changeset node identifier"""
        return self._data[0]

    def succnodes(self):
        """List of successor changeset node identifiers"""
        return self._data[1]

    def metadata(self):
        """Decoded metadata dictionary"""
        if self._decodedmeta is None:
            self._decodedmeta = decodemeta(self._data[3])
        return self._decodedmeta

    def date(self):
        """Creation date as (unixtime, offset)"""
        return self._data[4]

    def flags(self):
        """The flags field of the marker"""
        return self._data[2]

class obsstore(object):
    """Store obsolete markers

    Markers can be accessed with two mappings:
    - precursors[x] -> set(markers on precursors edges of x)
    - successors[x] -> set(markers on successors edges of x)
    """

    fields = ('prec', 'succs', 'flag', 'meta', 'date', 'parents')
    # prec:    nodeid, precursor changesets
    # succs:   tuple of nodeid, successor changesets (0-N length)
    # flag:    integer, flag field carrying modifier for the markers (see doc)
    # meta:    binary blob, encoded metadata dictionary
    # date:    (float, int) tuple, date of marker creation
    # parents: (tuple of nodeid) or None, parents of precursors
    #          None is used when no data has been recorded

    def __init__(self, sopener):
        # caches for various obsolescence related data
        self.caches = {}
        self._all = []
        # new markers to serialize
        self.precursors = {}
        self.successors = {}
        self.sopener = sopener
        data = sopener.tryread('obsstore')
        if data:
            self._load(_readmarkers(data))

    def __iter__(self):
        return iter(self._all)

    def __len__(self):
        return len(self._all)

    def __nonzero__(self):
        return bool(self._all)

    def create(self, transaction, prec, succs=(), flag=0, parents=None,
               date=None, metadata=None):
        """obsolete: add a new obsolete marker

        * ensuring it is hashable
        * checking mandatory metadata
        * encoding metadata

        If you are a human writing code creating markers you want to use the
        `createmarkers` function in this module instead.

        Return True if a new marker has been added, False if the marker
        already existed (no op).
        """
        if metadata is None:
            metadata = {}
        if date is None:
            if 'date' in metadata:
                # as a courtesy for out-of-tree extensions
                date = util.parsedate(metadata.pop('date'))
            else:
                date = util.makedate()
        if len(prec) != 20:
            raise ValueError(prec)
        for succ in succs:
            if len(succ) != 20:
                raise ValueError(succ)
        if prec in succs:
            raise ValueError(_('in-marker cycle with %s') % node.hex(prec))
        marker = (str(prec), tuple(succs), int(flag), encodemeta(metadata),
                  date, parents)
        return bool(self.add(transaction, [marker]))

    def add(self, transaction, markers):
        """Add new markers to the store

        Take care of filtering duplicates.
        Return the number of new markers."""
        if not _enabled:
            raise util.Abort('obsolete feature is not enabled on this repo')
        known = set(self._all)
        new = []
        for m in markers:
            if m not in known:
                known.add(m)
                new.append(m)
        if new:
            f = self.sopener('obsstore', 'ab')
            try:
                # Whether the file's current position is at the beginning or
                # at the end after opening a file for appending is
                # implementation defined. So we must seek to the end before
                # calling tell(), or we may get a zero offset for non-zero
                # sized files on some platforms (issue3543).
                f.seek(0, _SEEK_END)
                offset = f.tell()
                transaction.add('obsstore', offset)
                # offset == 0: new file - add the version header
                for bytes in _encodemarkers(new, offset == 0):
                    f.write(bytes)
            finally:
                # XXX: f.close() == filecache invalidation == obsstore rebuilt.
                # call 'filecacheentry.refresh()' here
                f.close()
            self._load(new)
            # new markers *may* have changed several sets. invalidate the cache.
            self.caches.clear()
        return len(new)

    def mergemarkers(self, transaction, data):
        markers = _readmarkers(data)
        self.add(transaction, markers)

    def _load(self, markers):
        for mark in markers:
            self._all.append(mark)
            pre, sucs = mark[:2]
            self.successors.setdefault(pre, set()).add(mark)
            for suc in sucs:
                self.precursors.setdefault(suc, set()).add(mark)
        if node.nullid in self.precursors:
            raise util.Abort(_('bad obsolescence marker detected: '
                               'invalid successors nullid'))

def _encodemarkers(markers, addheader=False):
    # Kept separate from flushmarkers(), it will be reused for
    # markers exchange.
    if addheader:
        yield _pack('>B', _fmversion)
    for marker in markers:
        yield _encodeonemarker(marker)


def _encodeonemarker(marker):
    pre, sucs, flags, metadata, date, parents = marker
    metadata = decodemeta(metadata)
    metadata['date'] = '%d %i' % date
    if parents is not None:
        if not parents:
            # mark that we explicitly recorded no parents
            metadata['p0'] = ''
        for i, p in enumerate(parents, 1):
            metadata['p%i' % i] = node.hex(p)
    metadata = encodemeta(metadata)
    nbsuc = len(sucs)
    format = _fmfixed + (_fmnode * nbsuc)
    data = [nbsuc, len(metadata), flags, pre]
    data.extend(sucs)
    return _pack(format, *data) + metadata

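The new 'parents' field introduced by this changeset is round-tripped through the metadata dict as `p1`/`p2` keys, with `p0` marking an explicitly recorded empty parent list and `None` meaning "not recorded". A standalone sketch of that encoding (using plain strings where the real code uses `node.hex`/`node.bin`):

```python
# Encoding side, as in _encodeonemarker: fold parents into metadata keys.
def parents_to_meta(parents):
    meta = {}
    if parents is not None:
        if not parents:
            meta['p0'] = ''   # explicitly recorded: no parents
        for i, p in enumerate(parents, 1):
            meta['p%i' % i] = p  # node.hex(p) in the real code
    return meta

# Decoding side, as in _readmarkers: recover the tuple (or None).
def meta_to_parents(meta):
    if 'p2' in meta:
        return (meta.pop('p1', None), meta.pop('p2', None))
    elif 'p1' in meta:
        return (meta.pop('p1', None),)
    elif 'p0' in meta:
        return ()
    return None  # nothing was recorded

assert meta_to_parents(parents_to_meta(('a', 'b'))) == ('a', 'b')
assert meta_to_parents(parents_to_meta(())) == ()
assert meta_to_parents(parents_to_meta(None)) is None
```

The three-way distinction matters: an empty tuple says "we know there are no parents", while `None` says "no parent information was ever stored", and older markers without `p0`/`p1`/`p2` keys naturally decode to `None`.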
# arbitrarily picked to fit into 8K limit from HTTP server
# you have to take into account:
# - the version header
# - the base85 encoding
_maxpayload = 5300

def _pushkeyescape(markers):
    """encode markers into a dict suitable for pushkey exchange

    - binary data is base85 encoded
    - split in chunks smaller than 5300 bytes"""
    keys = {}
    parts = []
    currentlen = _maxpayload * 2 # ensure we create a new part
    for marker in markers:
        nextdata = _encodeonemarker(marker)
        if (len(nextdata) + currentlen > _maxpayload):
            currentpart = []
            currentlen = 0
            parts.append(currentpart)
        currentpart.append(nextdata)
        currentlen += len(nextdata)
    for idx, part in enumerate(reversed(parts)):
        data = ''.join([_pack('>B', _fmversion)] + part)
        keys['dump%i' % idx] = base85.b85encode(data)
    return keys

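The chunking in `_pushkeyescape` can be isolated from the marker encoding: greedily pack blobs into parts, opening a new part whenever adding the next blob would exceed the payload cap. A minimal sketch (the helper name `chunk` is made up for the example):

```python
# Greedy chunking as in _pushkeyescape: start with an oversized
# currentlen so the first blob always opens a fresh part.
def chunk(blobs, cap):
    parts = []
    currentpart = []
    currentlen = cap * 2  # force creation of a first part
    for blob in blobs:
        if len(blob) + currentlen > cap:
            currentpart = []
            currentlen = 0
            parts.append(currentpart)
        currentpart.append(blob)
        currentlen += len(blob)
    return parts

parts = chunk([b'x' * 40] * 10, cap=100)
# each part holds two 40-byte blobs; a third would exceed the 100-byte cap
assert len(parts) == 5
assert all(sum(len(b) for b in p) <= 100 for p in parts)
```

Note the cap applies to the raw marker bytes; the real code then prepends a version byte and base85-encodes each part, which is why `_maxpayload` is set well below the 8K HTTP limit.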
def listmarkers(repo):
    """List markers over pushkey"""
    if not repo.obsstore:
        return {}
    return _pushkeyescape(repo.obsstore)

def pushmarker(repo, key, old, new):
    """Push markers over pushkey"""
    if not key.startswith('dump'):
        repo.ui.warn(_('unknown key: %r') % key)
        return 0
    if old:
        repo.ui.warn(_('unexpected old value for %r') % key)
        return 0
    data = base85.b85decode(new)
    lock = repo.lock()
    try:
        tr = repo.transaction('pushkey: obsolete markers')
        try:
            repo.obsstore.mergemarkers(tr, data)
            tr.close()
            return 1
        finally:
            tr.release()
    finally:
        lock.release()

def allmarkers(repo):
    """all obsolete markers known in a repository"""
    for markerdata in repo.obsstore:
        yield marker(repo, markerdata)

def precursormarkers(ctx):
    """obsolete markers marking this changeset as a successor"""
    for data in ctx._repo.obsstore.precursors.get(ctx.node(), ()):
        yield marker(ctx._repo, data)

def successormarkers(ctx):
    """obsolete markers making this changeset obsolete"""
    for data in ctx._repo.obsstore.successors.get(ctx.node(), ()):
        yield marker(ctx._repo, data)

def allsuccessors(obsstore, nodes, ignoreflags=0):
    """Yield node for every successor of <nodes>.

    Some successors may be unknown locally.

    This is a linear yield unsuited to detecting split changesets. It includes
    initial nodes too."""
    remaining = set(nodes)
    seen = set(remaining)
    while remaining:
        current = remaining.pop()
        yield current
        for mark in obsstore.successors.get(current, ()):
            # ignore marker flagged with specified flag
            if mark[2] & ignoreflags:
                continue
            for suc in mark[1]:
                if suc not in seen:
                    seen.add(suc)
                    remaining.add(suc)

def allprecursors(obsstore, nodes, ignoreflags=0):
    """Yield node for every precursor of <nodes>.

    Some precursors may be unknown locally.

    This is a linear yield unsuited to detecting folded changesets. It includes
    initial nodes too."""

    remaining = set(nodes)
    seen = set(remaining)
    while remaining:
        current = remaining.pop()
        yield current
        for mark in obsstore.precursors.get(current, ()):
            # ignore marker flagged with specified flag
            if mark[2] & ignoreflags:
                continue
            suc = mark[0]
            if suc not in seen:
                seen.add(suc)
                remaining.add(suc)

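The traversal shared by `allsuccessors` and `allprecursors` is an ordinary worklist-based transitive closure over the marker graph. A standalone sketch with the obsstore replaced by a plain dict of successor edges:

```python
# Worklist walk as in allsuccessors: yield every node reachable through
# successor edges, each node once, initial nodes included.
def walk_successors(successors, nodes):
    remaining = set(nodes)
    seen = set(remaining)
    while remaining:
        current = remaining.pop()
        yield current
        for suc in successors.get(current, ()):
            if suc not in seen:
                seen.add(suc)
                remaining.add(suc)

# A -> A2 -> A3 is a rewrite chain; B has no successors (e.g. pruned)
edges = {'A': ['A2'], 'A2': ['A3']}
assert set(walk_successors(edges, ['A', 'B'])) == {'A', 'A2', 'A3', 'B'}
```

Because the walk flattens every marker's successors into one set, it cannot tell a split (one marker, several successors) from a divergence (several markers); that is exactly the "linear yield unsuited to detecting split changesets" caveat in the docstrings above.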
def foreground(repo, nodes):
    """return all nodes in the "foreground" of the given nodes

    The foreground of a revision is anything reachable using the
    parent -> children or precursor -> successor relations. It is very similar
    to "descendants" but augmented with obsolescence information.

    Beware that obsolescence cycles may appear in complex situations.
    """
    repo = repo.unfiltered()
    foreground = set(repo.set('%ln::', nodes))
    if repo.obsstore:
        # We only need this complicated logic if there is obsolescence
        # XXX will probably deserve an optimised revset.
        nm = repo.changelog.nodemap
        plen = -1
        # compute the whole set of successors or descendants
        while len(foreground) != plen:
            plen = len(foreground)
            succs = set(c.node() for c in foreground)
            mutable = [c.node() for c in foreground if c.mutable()]
            succs.update(allsuccessors(repo.obsstore, mutable))
            known = (n for n in succs if n in nm)
            foreground = set(repo.set('%ln::', known))
    return set(c.node() for c in foreground)


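The fixed-point loop in foreground() can be sketched on plain graphs, with toy dicts standing in for the repository's parent -> children and precursor -> successor relations (the real code uses revsets and the obsstore; everything here is illustrative).

```python
def foreground_sketch(children, successors, nodes):
    """Iterate to a fixed point: anything reachable through either the
    parent -> children relation or the precursor -> successor relation
    belongs to the foreground."""
    fg = set(nodes)
    plen = -1
    while len(fg) != plen:   # stop once an iteration adds nothing new
        plen = len(fg)
        reached = set(fg)
        for n in fg:
            reached.update(children.get(n, ()))
            reached.update(successors.get(n, ()))
        fg = reached
    return fg

children = {'a': ['b'], 'b': []}
successors = {'b': ['c'], 'c': []}   # 'b' was rewritten as 'c'
fg = sorted(foreground_sketch(children, successors, ['a']))  # ['a', 'b', 'c']
```

Note how 'c' is reached even though it is not a descendant of 'a': the successor relation bridges the rewritten changeset, which is exactly why foreground() must alternate between the two relations until nothing new appears.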
def successorssets(repo, initialnode, cache=None):
    """Return all sets of successors of the initial node.

    The successors set of a changeset A is a group of revisions that succeed
    A. It succeeds A as a consistent whole, each revision being only a partial
    replacement. The successors set contains non-obsolete changesets only.

    This function returns the full list of successors sets, which is why it
    returns a list of tuples and not just a single tuple. Each tuple is a valid
    successors set. Note that (A,) may be a valid successors set for changeset A
    (see below).

    In most cases, a changeset A will have a single element (e.g. the changeset
    A is replaced by A') in its successors set. Though, it is also common for a
    changeset A to have no elements in its successors set (e.g. the changeset
    has been pruned). Therefore, the returned list of successors sets will be
    [(A',)] or [], respectively.

    When a changeset A is split into A' and B', however, it will result in a
    successors set containing more than a single element, i.e. [(A',B')].
    Divergent changesets will result in multiple successors sets, i.e. [(A',),
    (A'')].

    If a changeset A is not obsolete, then it will conceptually have no
    successors set. To distinguish this from a pruned changeset, the successors
    set will only contain itself, i.e. [(A,)].

    Finally, successors unknown locally are considered to be pruned (obsoleted
    without any successors).

    The optional `cache` parameter is a dictionary that may contain precomputed
    successors sets. It is meant to reuse the computation of a previous call to
    `successorssets` when multiple calls are made at the same time. The cache
    dictionary is updated in place. The caller is responsible for its life
    span. Code that makes multiple calls to `successorssets` *must* use this
    cache mechanism or suffer terrible performance.
    """

    succmarkers = repo.obsstore.successors

    # Stack of nodes we search successors sets for
    toproceed = [initialnode]
    # set version of above list for fast loop detection
    # element added to "toproceed" must be added here
    stackedset = set(toproceed)
    if cache is None:
        cache = {}

    # This while loop is the flattened version of a recursive search for
    # successors sets
    #
    # def successorssets(x):
    #     successors = directsuccessors(x)
    #     ss = [[]]
    #     for succ in directsuccessors(x):
    #         # product as in itertools cartesian product
    #         ss = product(ss, successorssets(succ))
    #     return ss
    #
    # But we can not use plain recursive calls here:
    # - that would blow the python call stack
    # - obsolescence markers may have cycles, we need to handle them.
    #
    # The `toproceed` list acts as our call stack. Every node we search
    # successors sets for is stacked there.
    #
    # The `stackedset` is the set version of this stack, used to check if a
    # node is already stacked. This check is used to detect cycles and prevent
    # infinite loops.
    #
    # successors sets of all nodes are stored in the `cache` dictionary.
    #
    # After this while loop ends we use the cache to return the successors sets
    # for the node requested by the caller.
    while toproceed:
        # Every iteration tries to compute the successors sets of the topmost
        # node of the stack: CURRENT.
        #
        # There are four possible outcomes:
        #
        # 1) We already know the successors sets of CURRENT:
        #    -> mission accomplished, pop it from the stack.
        # 2) Node is not obsolete:
        #    -> the node is its own successors set. Add it to the cache.
        # 3) We do not know the successors sets of direct successors of CURRENT:
        #    -> We add those successors to the stack.
        # 4) We know the successors sets of all direct successors of CURRENT:
        #    -> We can compute the CURRENT successors sets and add them to the
        #       cache.
        #
        current = toproceed[-1]
        if current in cache:
            # case (1): We already know the successors sets
            stackedset.remove(toproceed.pop())
        elif current not in succmarkers:
            # case (2): The node is not obsolete.
            if current in repo:
                # We have a valid last successor.
                cache[current] = [(current,)]
            else:
                # Final obsolete version is unknown locally.
                # Do not count that as a valid successor.
                cache[current] = []
        else:
            # cases (3) and (4)
            #
            # We proceed in two phases. Phase 1 aims to distinguish case (3)
            # from case (4):
            #
            #     For each direct successor of CURRENT, we check whether its
            #     successors sets are known. If they are not, we stack the
            #     unknown node and proceed to the next iteration of the while
            #     loop. (case 3)
            #
            #     During this step, we may detect obsolescence cycles: a node
            #     with unknown successors sets but already in the call stack.
            #     In such a situation, we arbitrarily set the successors sets
            #     of the node to nothing (node pruned) to break the cycle.
            #
            #     If no break was encountered we proceed to phase 2.
            #
            # Phase 2 computes successors sets of CURRENT (case 4); see details
            # in phase 2 itself.
            #
            # Note the two levels of iteration in each phase.
            # - The first one handles obsolescence markers using CURRENT as
            #   precursor (successors markers of CURRENT).
            #
            #   Having multiple entries here means divergence.
            #
            # - The second one handles successors defined in each marker.
            #
            #   Having none means pruned node, multiple successors means split,
            #   a single successor is a standard replacement.
            #
            for mark in sorted(succmarkers[current]):
                for suc in mark[1]:
                    if suc not in cache:
                        if suc in stackedset:
                            # cycle breaking
                            cache[suc] = []
                        else:
                            # case (3) If we have not computed successors sets
                            # of one of those successors we add it to the
                            # `toproceed` stack and stop all work for this
                            # iteration.
                            toproceed.append(suc)
                            stackedset.add(suc)
                            break
                else:
                    continue
                break
            else:
                # case (4): we know all successors sets of all direct
                # successors
                #
                # The successors set contributed by each marker depends on the
                # successors sets of all its "successors" nodes.
                #
                # Each different marker is a divergence in the obsolescence
                # history. It contributes successors sets distinct from other
                # markers.
                #
                # Within a marker, a successor may have divergent successors
                # sets. In such a case, the marker will contribute multiple
                # divergent successors sets. If multiple successors have
                # divergent successors sets, a Cartesian product is used.
                #
                # At the end we post-process successors sets to remove
                # duplicated entries and successors sets that are strict
                # subsets of another one.
                succssets = []
                for mark in sorted(succmarkers[current]):
                    # successors sets contributed by this marker
                    markss = [[]]
                    for suc in mark[1]:
                        # cartesian product with previous successors
                        productresult = []
                        for prefix in markss:
                            for suffix in cache[suc]:
                                newss = list(prefix)
                                for part in suffix:
                                    # do not duplicate entries in a successors
                                    # set; first entry wins.
                                    if part not in newss:
                                        newss.append(part)
                                productresult.append(newss)
                        markss = productresult
                    succssets.extend(markss)
                # remove duplicated and subset
                seen = []
                final = []
                candidate = sorted(((set(s), s) for s in succssets if s),
                                   key=lambda x: len(x[1]), reverse=True)
                for setversion, listversion in candidate:
                    for seenset in seen:
                        if setversion.issubset(seenset):
                            break
                    else:
                        final.append(listversion)
                        seen.append(setversion)
                final.reverse() # put small successors set first
                cache[current] = final
    return cache[initialnode]

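The phase-2 combination step above (Cartesian product per marker, then duplicate and subset pruning) can be exercised in isolation. The sketch below mirrors that logic over plain data: `markers` is a list of successor tuples (one per marker of the node being processed) and `cache` maps each successor to its own list of successors sets, as in the real cache. Names and data are illustrative.

```python
def combine_markers_sketch(markers, cache):
    """Combine the successors sets contributed by each marker of one node."""
    succssets = []
    for sucs in markers:
        markss = [[]]
        for suc in sucs:
            # cartesian product with previously accumulated prefixes
            productresult = []
            for prefix in markss:
                for suffix in cache[suc]:
                    newss = list(prefix)
                    for part in suffix:
                        if part not in newss:   # first entry wins
                            newss.append(part)
                    productresult.append(newss)
            markss = productresult
        succssets.extend(markss)
    # drop duplicates and strict subsets, keeping the largest sets
    seen, final = [], []
    candidate = sorted(((set(s), s) for s in succssets if s),
                       key=lambda x: len(x[1]), reverse=True)
    for setversion, listversion in candidate:
        if not any(setversion.issubset(s) for s in seen):
            final.append(listversion)
            seen.append(setversion)
    final.reverse()   # put small successors sets first
    return final

# One marker splits X into A and B; A is final, B itself diverged into B1 or B2,
# so X ends up with two successors sets.
result = combine_markers_sketch([('A', 'B')],
                                {'A': [['A']], 'B': [['B1'], ['B2']]})
```

Each divergent successors set of 'B' yields its own combined set for X, which is how divergence propagates through the product.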
def _knownrevs(repo, nodes):
    """yield revision numbers of known nodes passed in parameters

    Unknown revisions are silently ignored."""
    torev = repo.changelog.nodemap.get
    for n in nodes:
        rev = torev(n)
        if rev is not None:
            yield rev

# mapping of 'set-name' -> <function to compute this set>
cachefuncs = {}
def cachefor(name):
    """Decorator to register a function as computing the cache for a set"""
    def decorator(func):
        assert name not in cachefuncs
        cachefuncs[name] = func
        return func
    return decorator

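The cachefor/getrevs pair is a small registry-plus-memoization pattern: the decorator maps a set name to its computing function, and lookups run the function lazily, caching the result. A standalone sketch of the same pattern (all names here are illustrative, not part of this module):

```python
cachefuncs_demo = {}   # registry: set name -> computing function

def cachefor_demo(name):
    """Register the decorated function as the computer for <name>."""
    def decorator(func):
        assert name not in cachefuncs_demo, 'duplicate cache name'
        cachefuncs_demo[name] = func
        return func
    return decorator

_caches = {}   # stands in for repo.obsstore.caches

def getset_demo(name):
    """Compute the named set on first access, then serve it from cache."""
    if name not in _caches:
        _caches[name] = cachefuncs_demo[name]()
    return _caches[name]

@cachefor_demo('evens')
def _computeevens():
    return {0, 2, 4}

evens = getset_demo('evens')   # computed once, cached thereafter
```

Invalidation is then just clearing `_caches`, which is what clearobscaches() does on the real obsstore.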
def getrevs(repo, name):
    """Return the set of revisions that belong to the <name> set

    Such access may compute the set and cache it for future use"""
    repo = repo.unfiltered()
    if not repo.obsstore:
        return ()
    if name not in repo.obsstore.caches:
        repo.obsstore.caches[name] = cachefuncs[name](repo)
    return repo.obsstore.caches[name]

# To be simple we need to invalidate obsolescence caches when:
#
# - a new changeset is added
# - a public phase is changed
# - obsolescence markers are added
# - strip is used on a repo
def clearobscaches(repo):
    """Remove all obsolescence related caches from a repo

    This removes all caches in the obsstore if the obsstore already exists on
    the repo.

    (We could be smarter here given the exact event that triggered the cache
    clearing)"""
    # only clear caches if there is obsstore data in this repo
    if 'obsstore' in repo._filecache:
        repo.obsstore.caches.clear()

@cachefor('obsolete')
def _computeobsoleteset(repo):
    """the set of obsolete revisions"""
    obs = set()
    getrev = repo.changelog.nodemap.get
    getphase = repo._phasecache.phase
    for n in repo.obsstore.successors:
        rev = getrev(n)
        if rev is not None and getphase(repo, rev):
            obs.add(rev)
    return obs

@cachefor('unstable')
def _computeunstableset(repo):
    """the set of non obsolete revisions with obsolete parents"""
    # revset is not efficient enough here
    # we do (obsolete()::) - obsolete() by hand
    obs = getrevs(repo, 'obsolete')
    if not obs:
        return set()
    cl = repo.changelog
    return set(r for r in cl.descendants(obs) if r not in obs)

@cachefor('suspended')
def _computesuspendedset(repo):
    """the set of obsolete parents with non obsolete descendants"""
    suspended = repo.changelog.ancestors(getrevs(repo, 'unstable'))
    return set(r for r in getrevs(repo, 'obsolete') if r in suspended)

@cachefor('extinct')
def _computeextinctset(repo):
    """the set of obsolete parents without non obsolete descendants"""
    return getrevs(repo, 'obsolete') - getrevs(repo, 'suspended')


@cachefor('bumped')
def _computebumpedset(repo):
    """the set of revs trying to obsolete public revisions"""
    bumped = set()
    # util function (avoid attribute lookup in the loop)
    phase = repo._phasecache.phase # would be faster to grab the full list
    public = phases.public
    cl = repo.changelog
    torev = cl.nodemap.get
    obs = getrevs(repo, 'obsolete')
    for rev in repo:
        # We only evaluate mutable, non-obsolete revisions
        if (public < phase(repo, rev)) and (rev not in obs):
            node = cl.node(rev)
            # (future) A cache of precursors may be worth it if split is very
            # common
            for pnode in allprecursors(repo.obsstore, [node],
                                       ignoreflags=bumpedfix):
                prev = torev(pnode) # unfiltered! but so is phasecache
                if (prev is not None) and (phase(repo, prev) <= public):
                    # we have a public precursor
                    bumped.add(rev)
                    break # Next draft!
    return bumped

@cachefor('divergent')
def _computedivergentset(repo):
    """the set of revs that compete to be the final successors of some
    revision.
    """
    divergent = set()
    obsstore = repo.obsstore
    newermap = {}
    for ctx in repo.set('(not public()) - obsolete()'):
        mark = obsstore.precursors.get(ctx.node(), ())
        toprocess = set(mark)
        while toprocess:
            prec = toprocess.pop()[0]
            if prec not in newermap:
                successorssets(repo, prec, newermap)
            newer = [n for n in newermap[prec] if n]
            if len(newer) > 1:
                divergent.add(ctx.rev())
                break
            toprocess.update(obsstore.precursors.get(prec, ()))
    return divergent


def createmarkers(repo, relations, flag=0, date=None, metadata=None):
    """Add obsolete markers between changesets in a repo

    <relations> must be an iterable of (<old>, (<new>, ...)[,{metadata}])
    tuples. `old` and `news` are changectx. metadata is an optional dictionary
    containing metadata for this marker only. It is merged with the global
    metadata specified through the `metadata` argument of this function.

    Trying to obsolete a public changeset will raise an exception.

    The current user and date are used except if specified otherwise in the
    metadata attribute.

    This function operates within a transaction of its own, but does
    not take any lock on the repo.
    """
    # prepare metadata
    if metadata is None:
        metadata = {}
    if 'user' not in metadata:
        metadata['user'] = repo.ui.username()
    tr = repo.transaction('add-obsolescence-marker')
    try:
        for rel in relations:
            prec = rel[0]
            sucs = rel[1]
            localmetadata = metadata.copy()
            if 2 < len(rel):
                localmetadata.update(rel[2])

            if not prec.mutable():
                raise util.Abort("cannot obsolete immutable changeset: %s"
                                 % prec)
            nprec = prec.node()
            nsucs = tuple(s.node() for s in sucs)
            npare = None
            if not nsucs:
                npare = tuple(p.node() for p in prec.parents())
            if nprec in nsucs:
                raise util.Abort("changeset %s cannot obsolete itself" % prec)
            repo.obsstore.create(tr, nprec, nsucs, flag, parents=npare,
                                 date=date, metadata=localmetadata)
            repo.filteredrevcache.clear()
        tr.close()
    finally:
        tr.release()