obsolete: move _fm0encodeonemarker next to _fm0readmarkers
Pierre-Yves David
r22330:e74f8a65 default
@@ -1,984 +1,984 @@
# obsolete.py - obsolete markers handling
#
# Copyright 2012 Pierre-Yves David <pierre-yves.david@ens-lyon.org>
#                Logilab SA        <contact@logilab.fr>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

9 """Obsolete marker handling
9 """Obsolete marker handling
10
10
11 An obsolete marker maps an old changeset to a list of new
11 An obsolete marker maps an old changeset to a list of new
12 changesets. If the list of new changesets is empty, the old changeset
12 changesets. If the list of new changesets is empty, the old changeset
13 is said to be "killed". Otherwise, the old changeset is being
13 is said to be "killed". Otherwise, the old changeset is being
14 "replaced" by the new changesets.
14 "replaced" by the new changesets.
15
15
16 Obsolete markers can be used to record and distribute changeset graph
16 Obsolete markers can be used to record and distribute changeset graph
17 transformations performed by history rewrite operations, and help
17 transformations performed by history rewrite operations, and help
18 building new tools to reconcile conflicting rewrite actions. To
18 building new tools to reconcile conflicting rewrite actions. To
19 facilitate conflict resolution, markers include various annotations
19 facilitate conflict resolution, markers include various annotations
20 besides old and news changeset identifiers, such as creation date or
20 besides old and news changeset identifiers, such as creation date or
21 author name.
21 author name.
22
22
23 The old obsoleted changeset is called a "precursor" and possible
23 The old obsoleted changeset is called a "precursor" and possible
24 replacements are called "successors". Markers that used changeset X as
24 replacements are called "successors". Markers that used changeset X as
25 a precursor are called "successor markers of X" because they hold
25 a precursor are called "successor markers of X" because they hold
26 information about the successors of X. Markers that use changeset Y as
26 information about the successors of X. Markers that use changeset Y as
27 a successors are call "precursor markers of Y" because they hold
27 a successors are call "precursor markers of Y" because they hold
28 information about the precursors of Y.
28 information about the precursors of Y.
29
29
30 Examples:
30 Examples:
31
31
32 - When changeset A is replaced by changeset A', one marker is stored:
32 - When changeset A is replaced by changeset A', one marker is stored:
33
33
34 (A, (A',))
34 (A, (A',))
35
35
36 - When changesets A and B are folded into a new changeset C, two markers are
36 - When changesets A and B are folded into a new changeset C, two markers are
37 stored:
37 stored:
38
38
39 (A, (C,)) and (B, (C,))
39 (A, (C,)) and (B, (C,))
40
40
41 - When changeset A is simply "pruned" from the graph, a marker is created:
41 - When changeset A is simply "pruned" from the graph, a marker is created:
42
42
43 (A, ())
43 (A, ())
44
44
45 - When changeset A is split into B and C, a single marker are used:
45 - When changeset A is split into B and C, a single marker are used:
46
46
47 (A, (C, C))
47 (A, (C, C))
48
48
49 We use a single marker to distinguish the "split" case from the "divergence"
49 We use a single marker to distinguish the "split" case from the "divergence"
50 case. If two independent operations rewrite the same changeset A in to A' and
50 case. If two independent operations rewrite the same changeset A in to A' and
51 A'', we have an error case: divergent rewriting. We can detect it because
51 A'', we have an error case: divergent rewriting. We can detect it because
52 two markers will be created independently:
52 two markers will be created independently:
53
53
54 (A, (B,)) and (A, (C,))
54 (A, (B,)) and (A, (C,))
55
55
56 Format
56 Format
57 ------
57 ------
58
58
59 Markers are stored in an append-only file stored in
59 Markers are stored in an append-only file stored in
60 '.hg/store/obsstore'.
60 '.hg/store/obsstore'.
61
61
62 The file starts with a version header:
62 The file starts with a version header:
63
63
64 - 1 unsigned byte: version number, starting at zero.
64 - 1 unsigned byte: version number, starting at zero.
65
65
66
66
67 The header is followed by the markers. Each marker is made of:
67 The header is followed by the markers. Each marker is made of:
68
68
69 - 1 unsigned byte: number of new changesets "N", can be zero.
69 - 1 unsigned byte: number of new changesets "N", can be zero.
70
70
71 - 1 unsigned 32-bits integer: metadata size "M" in bytes.
71 - 1 unsigned 32-bits integer: metadata size "M" in bytes.
72
72
73 - 1 byte: a bit field. It is reserved for flags used in common
73 - 1 byte: a bit field. It is reserved for flags used in common
74 obsolete marker operations, to avoid repeated decoding of metadata
74 obsolete marker operations, to avoid repeated decoding of metadata
75 entries.
75 entries.
76
76
77 - 20 bytes: obsoleted changeset identifier.
77 - 20 bytes: obsoleted changeset identifier.
78
78
79 - N*20 bytes: new changesets identifiers.
79 - N*20 bytes: new changesets identifiers.
80
80
81 - M bytes: metadata as a sequence of nul-terminated strings. Each
81 - M bytes: metadata as a sequence of nul-terminated strings. Each
82 string contains a key and a value, separated by a colon ':', without
82 string contains a key and a value, separated by a colon ':', without
83 additional encoding. Keys cannot contain '\0' or ':' and values
83 additional encoding. Keys cannot contain '\0' or ':' and values
84 cannot contain '\0'.
84 cannot contain '\0'.
85
85
86 """
86 """
import struct
import util, base85, node
import phases
from i18n import _

_pack = struct.pack
_unpack = struct.unpack

_SEEK_END = 2 # os.SEEK_END was introduced in Python 2.5

# The obsolete feature is not mature enough to be enabled by default.
# You have to rely on a third party extension to enable this.
_enabled = False

# data used for parsing and writing
_fm0version = 0
_fm0fixed = '>BIB20s'
_fm0node = '20s'
_fm0fsize = struct.calcsize(_fm0fixed)
_fm0fnodesize = struct.calcsize(_fm0node)
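
# Editor's note: a minimal, hedged sketch (not part of this changeset)
# showing the fixed-size layout that the constants above describe. A fake
# marker head is packed and unpacked with the module's own format string;
# the 20-byte node below is a made-up placeholder.
def _fm0layoutexample():
    prec = 'x' * 20                          # fake 20-byte node identifier
    head = _pack(_fm0fixed, 2, 11, 0, prec)  # N=2 successors, M=11, flags=0
    assert len(head) == _fm0fsize
    return _unpack(_fm0fixed, head) == (2, 11, 0, prec)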

### obsolescence marker flag

## bumpedfix flag
#
# When a changeset A' succeeds a changeset A which became public, we call A'
# "bumped" because it's a successor of a public changeset
#
# o A' (bumped)
# |`:
# | o A
# |/
# o Z
#
# The way to solve this situation is to create a new changeset Ad as a child
# of A. This changeset has the same content as A'. So the diff from A to A'
# is the same as the diff from A to Ad. Ad is marked as a successor of A'.
#
# o Ad
# |`:
# | x A'
# |'|
# o | A
# |/
# o Z
#
# But by transitivity Ad is also a successor of A. To avoid having Ad marked
# as bumped too, we add the `bumpedfix` flag to the marker, <A', (Ad,)>.
# This flag means that the successors express the changes between the public
# and bumped version and fix the situation, breaking the transitivity of
# "bumped" here.
bumpedfix = 1

def _readmarkers(data):
    """Read and enumerate markers from raw data"""
    off = 0
    diskversion = _unpack('>B', data[off:off + 1])[0]
    off += 1
    if diskversion != _fm0version:
        raise util.Abort(_('parsing obsolete marker: unknown version %r')
                         % diskversion)
    return _fm0readmarkers(data, off)

def _fm0readmarkers(data, off=0):
    """Read and enumerate markers from raw data in format version 0"""
    # Loop on markers
    l = len(data)
    while off + _fm0fsize <= l:
        # read fixed part
        cur = data[off:off + _fm0fsize]
        off += _fm0fsize
        nbsuc, mdsize, flags, pre = _unpack(_fm0fixed, cur)
        # read replacement
        sucs = ()
        if nbsuc:
            s = (_fm0fnodesize * nbsuc)
            cur = data[off:off + s]
            sucs = _unpack(_fm0node * nbsuc, cur)
            off += s
        # read metadata
        # (metadata will be decoded on demand)
        metadata = data[off:off + mdsize]
        if len(metadata) != mdsize:
            raise util.Abort(_('parsing obsolete marker: metadata is too '
                               'short, %d bytes expected, got %d')
                             % (mdsize, len(metadata)))
        off += mdsize
        meta = decodemeta(metadata)
        try:
            when, offset = decodemeta(metadata).pop('date', '0 0').split(' ')
            date = float(when), int(offset)
        except ValueError:
            date = (0., 0)
        parents = None
        if 'p2' in meta:
            parents = (meta.pop('p1', None), meta.pop('p2', None))
        elif 'p1' in meta:
            parents = (meta.pop('p1', None),)
        elif 'p0' in meta:
            parents = ()
        if parents is not None:
            try:
                parents = tuple(node.bin(p) for p in parents)
                # if parent content is not a nodeid, drop the data
                for p in parents:
                    if len(p) != 20:
                        parents = None
                        break
            except TypeError:
                # if content cannot be translated to nodeid, drop the data.
                parents = None

        metadata = encodemeta(meta)

        yield (pre, sucs, flags, metadata, date, parents)

+def _fm0encodeonemarker(marker):
+    pre, sucs, flags, metadata, date, parents = marker
+    metadata = decodemeta(metadata)
+    metadata['date'] = '%d %i' % date
+    if parents is not None:
+        if not parents:
+            # mark that we explicitly recorded no parents
+            metadata['p0'] = ''
+        for i, p in enumerate(parents):
+            metadata['p%i' % (i + 1)] = node.hex(p)
+    metadata = encodemeta(metadata)
+    nbsuc = len(sucs)
+    format = _fm0fixed + (_fm0node * nbsuc)
+    data = [nbsuc, len(metadata), flags, pre]
+    data.extend(sucs)
+    return _pack(format, *data) + metadata
+
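# Editor's note: a hedged sketch (not part of this changeset) of why the
# move helps readability: the encoder above and _fm0readmarkers are exact
# inverses, so a marker survives a round trip. Node values are fake
# 20-byte placeholders.
def _fm0roundtripexample():
    m = ('p' * 20, ('s' * 20,), 0, encodemeta({'user': 'alice'}),
         (0.0, 0), None)
    back = list(_fm0readmarkers(_fm0encodeonemarker(m)))[0]
    return back[0] == m[0] and back[1] == m[1]
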
def encodemeta(meta):
    """Return encoded metadata string to string mapping.

    Assume no ':' in key and no '\0' in both key and value."""
    for key, value in meta.iteritems():
        if ':' in key or '\0' in key:
            raise ValueError("':' and '\0' are forbidden in metadata keys")
        if '\0' in value:
            raise ValueError("'\0' is forbidden in metadata values")
    return '\0'.join(['%s:%s' % (k, meta[k]) for k in sorted(meta)])

def decodemeta(data):
    """Return string to string dictionary from encoded version."""
    d = {}
    for l in data.split('\0'):
        if l:
            # values may contain ':', so split on the first colon only
            key, value = l.split(':', 1)
            d[key] = value
    return d

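# Editor's note: a small hedged sketch of the metadata wire format used
# above: sorted 'key:value' pairs joined by NUL bytes. With the first-colon
# split, values containing ':' round-trip safely. Example data is made up.
def _metaroundtripexample():
    meta = {'user': 'alice', 'note': 'see: docstring'}
    assert encodemeta(meta) == 'note:see: docstring\0user:alice'
    return decodemeta(encodemeta(meta)) == meta
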
class marker(object):
    """Wrap obsolete marker raw data"""

    def __init__(self, repo, data):
        # the repo argument will be used to create changectx in a later
        # version
        self._repo = repo
        self._data = data
        self._decodedmeta = None

    def __hash__(self):
        return hash(self._data)

    def __eq__(self, other):
        if type(other) != type(self):
            return False
        return self._data == other._data

    def precnode(self):
        """Precursor changeset node identifier"""
        return self._data[0]

    def succnodes(self):
        """List of successor changeset node identifiers"""
        return self._data[1]

    def parentnodes(self):
        """Parents of the precursors (None if not recorded)"""
        return self._data[5]

    def metadata(self):
        """Decoded metadata dictionary"""
        if self._decodedmeta is None:
            self._decodedmeta = decodemeta(self._data[3])
        return self._decodedmeta

    def date(self):
        """Creation date as (unixtime, offset)"""
        return self._data[4]

    def flags(self):
        """The flags field of the marker"""
        return self._data[2]

class obsstore(object):
    """Store obsolete markers

    Markers can be accessed with three mappings:
    - precursors[x] -> set(markers on precursors edges of x)
    - successors[x] -> set(markers on successors edges of x)
    - children[x]   -> set(markers on precursors edges of children(x))
    """

    fields = ('prec', 'succs', 'flag', 'meta', 'date', 'parents')
    # prec: nodeid, precursor changesets
    # succs: tuple of nodeid, successor changesets (0-N length)
    # flag: integer, flag field carrying modifier for the markers (see doc)
    # meta: binary blob, encoded metadata dictionary
    # date: (float, int) tuple, date of marker creation
    # parents: (tuple of nodeid) or None, parents of precursors
    #          None is used when no data has been recorded

    def __init__(self, sopener):
        # caches for various obsolescence related data
        self.caches = {}
        self._all = []
        self.precursors = {}
        self.successors = {}
        self.children = {}
        self.sopener = sopener
        data = sopener.tryread('obsstore')
        if data:
            self._load(_readmarkers(data))

    def __iter__(self):
        return iter(self._all)

    def __len__(self):
        return len(self._all)

    def __nonzero__(self):
        return bool(self._all)

    def create(self, transaction, prec, succs=(), flag=0, parents=None,
               date=None, metadata=None):
        """obsolete: add a new obsolete marker

        * ensuring it is hashable
        * check mandatory metadata
        * encode metadata

        If you are a human writing code creating markers, you want to use the
        `createmarkers` function in this module instead.

        return True if a new marker has been added, False if the markers
        already existed (no-op).
        """
        if metadata is None:
            metadata = {}
        if date is None:
            if 'date' in metadata:
                # as a courtesy for out-of-tree extensions
                date = util.parsedate(metadata.pop('date'))
            else:
                date = util.makedate()
        if len(prec) != 20:
            raise ValueError(prec)
        for succ in succs:
            if len(succ) != 20:
                raise ValueError(succ)
        if prec in succs:
            raise ValueError(_('in-marker cycle with %s') % node.hex(prec))
        marker = (str(prec), tuple(succs), int(flag), encodemeta(metadata),
                  date, parents)
        return bool(self.add(transaction, [marker]))

    def add(self, transaction, markers):
        """Add new markers to the store

        Take care of filtering duplicates.
        Return the number of new markers."""
        if not _enabled:
            raise util.Abort('obsolete feature is not enabled on this repo')
        known = set(self._all)
        new = []
        for m in markers:
            if m not in known:
                known.add(m)
                new.append(m)
        if new:
            f = self.sopener('obsstore', 'ab')
            try:
                # Whether the file's current position is at the beginning or
                # at the end after opening a file for appending is
                # implementation defined. So we must seek to the end before
                # calling tell(), or we may get a zero offset for non-zero
                # sized files on some platforms (issue3543).
                f.seek(0, _SEEK_END)
                offset = f.tell()
                transaction.add('obsstore', offset)
                # offset == 0: new file - add the version header
                for bytes in _encodemarkers(new, offset == 0):
                    f.write(bytes)
            finally:
                # XXX: f.close() == filecache invalidation == obsstore rebuilt.
                # call 'filecacheentry.refresh()' here
                f.close()
            self._load(new)
            # new markers *may* have changed several sets. invalidate the
            # cache.
            self.caches.clear()
        return len(new)

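    # Editor's note: a hedged sketch (not part of this changeset) of the
    # append pattern used by add() above (issue3543). In 'ab' mode the
    # initial file position is implementation defined, so seek to the end
    # before tell():
    #
    #     f = self.sopener('obsstore', 'ab')
    #     f.seek(0, _SEEK_END)   # _SEEK_END == os.SEEK_END == 2
    #     offset = f.tell()      # a reliable append offset for the transaction
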
    def mergemarkers(self, transaction, data):
        """merge a binary stream of markers inside the obsstore

        Returns the number of new markers added."""
        markers = _readmarkers(data)
        return self.add(transaction, markers)

    def _load(self, markers):
        for mark in markers:
            self._all.append(mark)
            pre, sucs = mark[:2]
            self.successors.setdefault(pre, set()).add(mark)
            for suc in sucs:
                self.precursors.setdefault(suc, set()).add(mark)
            parents = mark[5]
            if parents is not None:
                for p in parents:
                    self.children.setdefault(p, set()).add(mark)
        if node.nullid in self.precursors:
            raise util.Abort(_('bad obsolescence marker detected: '
                               'invalid successors nullid'))
    def relevantmarkers(self, nodes):
        """return a set of all obsolescence markers relevant to a set of nodes.

        "relevant" to a set of nodes means:

        - markers that use any of these changesets as a successor
        - prune markers of direct children of these changesets
        - recursive application of the two rules on precursors of these
          markers

        It is a set so you cannot rely on order."""

        pendingnodes = set(nodes)
        seenmarkers = set()
        seennodes = set(pendingnodes)
        precursorsmarkers = self.precursors
        children = self.children
        while pendingnodes:
            direct = set()
            for current in pendingnodes:
                direct.update(precursorsmarkers.get(current, ()))
                pruned = [m for m in children.get(current, ()) if not m[1]]
                direct.update(pruned)
            direct -= seenmarkers
            pendingnodes = set([m[0] for m in direct])
            seenmarkers |= direct
            pendingnodes -= seennodes
            seennodes |= pendingnodes
        return seenmarkers

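# Editor's note: a hedged toy sketch (not part of this changeset) of the
# three indexes _load() maintains, using one-letter stand-ins for the real
# 20-byte nodes.
def _loadindexexample():
    marks = [('A', ('B',), 0, '', (0.0, 0), None),   # A replaced by B
             ('C', (), 0, '', (0.0, 0), ('A',))]     # C pruned, parent A
    successors, precursors, children = {}, {}, {}
    for mark in marks:
        pre, sucs = mark[:2]
        successors.setdefault(pre, set()).add(mark)
        for suc in sucs:
            precursors.setdefault(suc, set()).add(mark)
        if mark[5] is not None:
            for p in mark[5]:
                children.setdefault(p, set()).add(mark)
    # -> (['A', 'C'], ['B'], ['A'])
    return sorted(successors), sorted(precursors), sorted(children)
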
def _encodemarkers(markers, addheader=False):
    # Kept separate from flushmarkers(), it will be reused for
    # markers exchange.
    if addheader:
        yield _pack('>B', _fm0version)
    for marker in markers:
        yield _fm0encodeonemarker(marker)


-def _fm0encodeonemarker(marker):
-    pre, sucs, flags, metadata, date, parents = marker
-    metadata = decodemeta(metadata)
-    metadata['date'] = '%d %i' % date
-    if parents is not None:
-        if not parents:
-            # mark that we explicitly recorded no parents
-            metadata['p0'] = ''
-        for i, p in enumerate(parents):
-            metadata['p%i' % (i + 1)] = node.hex(p)
-    metadata = encodemeta(metadata)
-    nbsuc = len(sucs)
-    format = _fm0fixed + (_fm0node * nbsuc)
-    data = [nbsuc, len(metadata), flags, pre]
-    data.extend(sucs)
-    return _pack(format, *data) + metadata
-
# arbitrarily picked to fit into the 8K limit from HTTP server
# you have to take into account:
# - the version header
# - the base85 encoding
_maxpayload = 5300

def _pushkeyescape(markers):
    """encode markers into a dict suitable for pushkey exchange

    - binary data is base85 encoded
    - split into chunks smaller than 5300 bytes"""
    keys = {}
    parts = []
    currentlen = _maxpayload * 2  # ensure we create a new part
    for marker in markers:
        nextdata = _fm0encodeonemarker(marker)
        if (len(nextdata) + currentlen > _maxpayload):
            currentpart = []
            currentlen = 0
            parts.append(currentpart)
        currentpart.append(nextdata)
        currentlen += len(nextdata)
    for idx, part in enumerate(reversed(parts)):
        data = ''.join([_pack('>B', _fm0version)] + part)
        keys['dump%i' % idx] = base85.b85encode(data)
    return keys

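# Editor's note: a hedged sketch of the greedy chunking used by
# _pushkeyescape() above: start a new part whenever the next blob would
# push the current part past the payload limit. Sizes are made up.
def _chunkingexample():
    blobs = ['a' * 40, 'b' * 40, 'c' * 40]
    limit = 100
    parts = []
    currentlen = limit * 2                   # force creation of a first part
    for blob in blobs:
        if len(blob) + currentlen > limit:
            currentpart = []
            currentlen = 0
            parts.append(currentpart)
        currentpart.append(blob)
        currentlen += len(blob)
    return [len(''.join(p)) for p in parts]  # -> [80, 40]
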
def listmarkers(repo):
    """List markers over pushkey"""
    if not repo.obsstore:
        return {}
    return _pushkeyescape(repo.obsstore)

def pushmarker(repo, key, old, new):
    """Push markers over pushkey"""
    if not key.startswith('dump'):
        repo.ui.warn(_('unknown key: %r') % key)
        return 0
    if old:
        repo.ui.warn(_('unexpected old value for %r') % key)
        return 0
    data = base85.b85decode(new)
    lock = repo.lock()
    try:
        tr = repo.transaction('pushkey: obsolete markers')
        try:
            repo.obsstore.mergemarkers(tr, data)
            tr.close()
            return 1
        finally:
            tr.release()
    finally:
        lock.release()

def getmarkers(repo, nodes=None):
    """returns markers known in a repository

    If <nodes> is specified, only markers "relevant" to those nodes are
    returned"""
    if nodes is None:
        rawmarkers = repo.obsstore
    else:
        rawmarkers = repo.obsstore.relevantmarkers(nodes)

    for markerdata in rawmarkers:
        yield marker(repo, markerdata)

def relevantmarkers(repo, node):
    """all obsolete markers relevant to some revision"""
    for markerdata in repo.obsstore.relevantmarkers(node):
        yield marker(repo, markerdata)


def precursormarkers(ctx):
    """obsolete markers marking this changeset as a successor"""
    for data in ctx._repo.obsstore.precursors.get(ctx.node(), ()):
        yield marker(ctx._repo, data)

def successormarkers(ctx):
    """obsolete markers making this changeset obsolete"""
    for data in ctx._repo.obsstore.successors.get(ctx.node(), ()):
        yield marker(ctx._repo, data)

def allsuccessors(obsstore, nodes, ignoreflags=0):
    """Yield every successor node of <nodes>.

    Some successors may be unknown locally.

    This is a linear yield unsuited to detecting split changesets. It includes
    initial nodes too."""
    remaining = set(nodes)
    seen = set(remaining)
    while remaining:
        current = remaining.pop()
        yield current
        for mark in obsstore.successors.get(current, ()):
            # ignore markers flagged with the specified flag
            if mark[2] & ignoreflags:
                continue
            for suc in mark[1]:
                if suc not in seen:
                    seen.add(suc)
                    remaining.add(suc)

def allprecursors(obsstore, nodes, ignoreflags=0):
    """Yield every precursor node of <nodes>.

    Some precursors may be unknown locally.

    This is a linear yield unsuited to detecting folded changesets. It
    includes initial nodes too."""

    remaining = set(nodes)
    seen = set(remaining)
    while remaining:
        current = remaining.pop()
        yield current
        for mark in obsstore.precursors.get(current, ()):
            # ignore markers flagged with the specified flag
            if mark[2] & ignoreflags:
                continue
            suc = mark[0]
            if suc not in seen:
                seen.add(suc)
                remaining.add(suc)

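# Editor's note: a hedged toy sketch of the closure walk that
# allsuccessors() performs, with one-letter stand-ins for nodes and marker
# tuples reduced to (prec, succs, flags).
def _allsuccessorsexample():
    successors = {'A': [('A', ('B', 'C'), 0)],
                  'B': [('B', ('D',), 0)]}
    remaining = set(['A'])
    seen = set(remaining)
    result = []
    while remaining:
        current = remaining.pop()
        result.append(current)
        for mark in successors.get(current, ()):
            for suc in mark[1]:
                if suc not in seen:
                    seen.add(suc)
                    remaining.add(suc)
    return sorted(result)                    # -> ['A', 'B', 'C', 'D']
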
def foreground(repo, nodes):
    """return all nodes in the "foreground" of other nodes

    The foreground of a revision is anything reachable using parent -> children
    or precursor -> successor relation. It is very similar to "descendant" but
    augmented with obsolescence information.

    Beware that possible obsolescence cycles may result in complex situations.
    """
    repo = repo.unfiltered()
    foreground = set(repo.set('%ln::', nodes))
    if repo.obsstore:
        # We only need this complicated logic if there is obsolescence
        # XXX will probably deserve an optimised revset.
        nm = repo.changelog.nodemap
        plen = -1
        # compute the whole set of successors or descendants
        while len(foreground) != plen:
            plen = len(foreground)
            succs = set(c.node() for c in foreground)
            mutable = [c.node() for c in foreground if c.mutable()]
            succs.update(allsuccessors(repo.obsstore, mutable))
            known = (n for n in succs if n in nm)
            foreground = set(repo.set('%ln::', known))
    return set(c.node() for c in foreground)


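# Editor's note: a hedged sketch of the fixed-point loop in foreground()
# above: keep extending the set until one full pass adds nothing new.
# The successor relation below is a made-up toy.
def _fixpointexample():
    succ = {'A': ['B'], 'B': ['C']}
    fg = set(['A'])
    plen = -1
    while len(fg) != plen:
        plen = len(fg)
        for n in list(fg):
            fg.update(succ.get(n, ()))
    return sorted(fg)                        # -> ['A', 'B', 'C']
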
def successorssets(repo, initialnode, cache=None):
    """Return all successors sets of the initial node

    The successors set of a changeset A is a group of revisions that succeed
    A. It succeeds A as a consistent whole, each revision being only a partial
    replacement. The successors set contains non-obsolete changesets only.

    This function returns the full list of successor sets which is why it
    returns a list of tuples and not just a single tuple. Each tuple is a valid
    successors set. Note that (A,) may be a valid successors set for changeset
    A (see below).

    In most cases, a changeset A will have a single element (e.g. the changeset
    A is replaced by A') in its successors set. Though, it is also common for a
    changeset A to have no elements in its successors set (e.g. the changeset
    has been pruned). Therefore, the returned list of successors sets will be
    [(A',)] or [], respectively.

    When a changeset A is split into A' and B', however, it will result in a
    successors set containing more than a single element, i.e. [(A',B')].
    Divergent changesets will result in multiple successors sets, i.e. [(A',),
    (A'')].

    If a changeset A is not obsolete, then it will conceptually have no
    successors set. To distinguish this from a pruned changeset, the successor
    set will only contain itself, i.e. [(A,)].

    Finally, successors unknown locally are considered to be pruned (obsoleted
    without any successors).

    The optional `cache` parameter is a dictionary that may contain precomputed
    successors sets. It is meant to reuse the computation of a previous call to
    `successorssets` when multiple calls are made at the same time. The cache
    dictionary is updated in place. The caller is responsible for its life
    span. Code that makes multiple calls to `successorssets` *must* use this
    cache mechanism or suffer terrible performance.

    """

    succmarkers = repo.obsstore.successors

    # Stack of nodes we search successors sets for
    toproceed = [initialnode]
    # set version of above list for fast loop detection
    # element added to "toproceed" must be added here
    stackedset = set(toproceed)
    if cache is None:
        cache = {}

    # This while loop is the flattened version of a recursive search for
    # successors sets
    #
    # def successorssets(x):
    #     successors = directsuccessors(x)
    #     ss = [[]]
    #     for succ in directsuccessors(x):
    #         # product as in itertools cartesian product
    #         ss = product(ss, successorssets(succ))
    #     return ss
    #
    # But we can not use plain recursive calls here:
    # - that would blow the python call stack
    # - obsolescence markers may have cycles, we need to handle them.
    #
    # The `toproceed` list acts as our call stack. Every node we search
    # successors sets for is stacked there.
    #
    # The `stackedset` is a set version of this stack used to check if a node
    # is already stacked. This check is used to detect cycles and prevent
    # infinite loops.
    #
    # successors sets of all nodes are stored in the `cache` dictionary.
    #
    # After this while loop ends we use the cache to return the successors
    # sets for the node requested by the caller.
    while toproceed:
        # Every iteration tries to compute the successors sets of the topmost
        # node of the stack: CURRENT.
        #
        # There are four possible outcomes:
        #
        # 1) We already know the successors sets of CURRENT:
        #    -> mission accomplished, pop it from the stack.
        # 2) Node is not obsolete:
        #    -> the node is its own successors sets. Add it to the cache.
        # 3) We do not know successors set of direct successors of CURRENT:
        #    -> We add those successors to the stack.
        # 4) We know successors sets of all direct successors of CURRENT:
        #    -> We can compute CURRENT successors set and add it to the
        #       cache.
        #
        current = toproceed[-1]
        if current in cache:
            # case (1): We already know the successors sets
            stackedset.remove(toproceed.pop())
        elif current not in succmarkers:
            # case (2): The node is not obsolete.
            if current in repo:
                # We have a valid last successors.
                cache[current] = [(current,)]
            else:
                # Final obsolete version is unknown locally.
                # Do not count that as a valid successors
                cache[current] = []
        else:
            # cases (3) and (4)
            #
            # We proceed in two phases. Phase 1 aims to distinguish case (3)
            # from case (4):
            #
            # For each direct successors of CURRENT, we check whether its
            # successors sets are known. If they are not, we stack the
            # unknown node and proceed to the next iteration of the while
            # loop. (case 3)
            #
            # During this step, we may detect obsolescence cycles: a node
            # with unknown successors sets but already in the call stack.
            # In such a situation, we arbitrarily set the successors sets of
            # the node to nothing (node pruned) to break the cycle.
            #
            # If no break was encountered we proceed to phase 2.
            #
            # Phase 2 computes successors sets of CURRENT (case 4); see details
            # in phase 2 itself.
            #
            # Note the two levels of iteration in each phase.
            # - The first one handles obsolescence markers using CURRENT as
            #   precursor (successors markers of CURRENT).
            #
            #   Having multiple entries here means divergence.
            #
            # - The second one handles successors defined in each marker.
            #
            #   Having none means pruned node, multiple successors means split,
            #   a single successor is a standard replacement.
            #
            for mark in sorted(succmarkers[current]):
                for suc in mark[1]:
                    if suc not in cache:
                        if suc in stackedset:
                            # cycle breaking
                            cache[suc] = []
                        else:
                            # case (3) If we have not computed successors sets
                            # of one of those successors we add it to the
                            # `toproceed` stack and stop all work for this
                            # iteration.
                            toproceed.append(suc)
                            stackedset.add(suc)
                            break
                else:
                    continue
                break
            else:
                # case (4): we know all successors sets of all direct
                # successors
                #
                # Successors set contributed by each marker depends on the
                # successors sets of all its "successors" node.
                #
                # Each different marker is a divergence in the obsolescence
                # history. It contributes successors sets distinct from other
                # markers.
                #
                # Within a marker, a successor may have divergent successors
                # sets. In such a case, the marker will contribute multiple
                # divergent successors sets. If multiple successors have
                # divergent successors sets, a Cartesian product is used.
                #
                # At the end we post-process successors sets to remove
                # duplicated entries and successors sets that are strict
                # subsets of another one.
                succssets = []
                for mark in sorted(succmarkers[current]):
                    # successors sets contributed by this marker
                    markss = [[]]
                    for suc in mark[1]:
                        # Cartesian product with previous successors
                        productresult = []
                        for prefix in markss:
                            for suffix in cache[suc]:
                                newss = list(prefix)
                                for part in suffix:
                                    # do not duplicate entries in successors
                                    # set. first entry wins.
                                    if part not in newss:
                                        newss.append(part)
                                productresult.append(newss)
                        markss = productresult
                    succssets.extend(markss)
                # remove duplicates and subsets
                seen = []
                final = []
                candidate = sorted(((set(s), s) for s in succssets if s),
                                   key=lambda x: len(x[1]), reverse=True)
                for setversion, listversion in candidate:
                    for seenset in seen:
                        if setversion.issubset(seenset):
                            break
                    else:
                        final.append(listversion)
                        seen.append(setversion)
                final.reverse()  # put small successors set first
                cache[current] = final
    return cache[initialnode]

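# Editor's note: a hedged, simplified sketch of the recursion that the
# while loop above flattens, ignoring cycles and repo membership. Markers
# are reduced to a dict mapping a node to its recorded successor tuples.
def _successorssetsexample():
    markers = {'A': [('B', 'C')],            # A split into B and C
               'D': [('E',), ('F',)]}        # D rewritten divergently
    def sets(n):
        if n not in markers:
            return [(n,)]                    # non-obsolete: its own set
        result = []
        for sucs in markers[n]:
            combos = [()]
            for suc in sucs:                 # Cartesian product step
                combos = [c + s for c in combos for s in sets(suc)]
            result.extend(combos)
        return result
    # -> ([('B', 'C')], [('E',), ('F',)])
    return sets('A'), sets('D')
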
def _knownrevs(repo, nodes):
    """yield revision numbers of known nodes passed as parameters

    Unknown revisions are silently ignored."""
    torev = repo.changelog.nodemap.get
    for n in nodes:
        rev = torev(n)
        if rev is not None:
            yield rev

# mapping of 'set-name' -> <function to compute this set>
cachefuncs = {}
def cachefor(name):
    """Decorator to register a function as computing the cache for a set"""
    def decorator(func):
        assert name not in cachefuncs
        cachefuncs[name] = func
        return func
    return decorator

def getrevs(repo, name):
    """Return the set of revisions that belong to the <name> set

    Such access may compute the set and cache it for future use"""
    repo = repo.unfiltered()
    if not repo.obsstore:
        return ()
    if name not in repo.obsstore.caches:
        repo.obsstore.caches[name] = cachefuncs[name](repo)
    return repo.obsstore.caches[name]

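# Editor's note: a hedged sketch of the registration pattern behind
# cachefor() above: the decorator just records the function in a dict keyed
# by set name, to be looked up later by getrevs().
def _cachefordemo():
    funcs = {}
    def register(name):
        def decorator(func):
            funcs[name] = func
            return func
        return decorator
    @register('evens')
    def _evens():
        return set([0, 2, 4])
    return funcs['evens']()                  # -> set([0, 2, 4])
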
# To be simple we need to invalidate obsolescence cache when:
#
# - a new changeset is added
# - public phase is changed
# - obsolescence markers are added
# - strip is used on a repo
def clearobscaches(repo):
    """Remove all obsolescence related caches from a repo

    This removes all caches in the obsstore if the obsstore already exists on
    the repo.

    (We could be smarter here given the exact events that trigger the cache
    clearing)"""
    # only clear caches if there is obsstore data in this repo
    if 'obsstore' in repo._filecache:
        repo.obsstore.caches.clear()

@cachefor('obsolete')
def _computeobsoleteset(repo):
    """the set of obsolete revisions"""
    obs = set()
    getrev = repo.changelog.nodemap.get
    getphase = repo._phasecache.phase
    for n in repo.obsstore.successors:
        rev = getrev(n)
        # a non-zero phase means the revision is mutable; public
        # changesets can never be obsolete
        if rev is not None and getphase(repo, rev):
            obs.add(rev)
    return obs

@cachefor('unstable')
def _computeunstableset(repo):
    """the set of non-obsolete revisions with obsolete ancestors"""
    # revsets are not efficient enough here
    # we compute (obsolete()::) - obsolete() by hand
    obs = getrevs(repo, 'obsolete')
    if not obs:
        return set()
    cl = repo.changelog
    return set(r for r in cl.descendants(obs) if r not in obs)

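# For illustration only: the revset spelling that the comment above refers
# to. This is semantically equivalent to _computeunstableset but slower,
# which is why the function computes the set by hand.
def _example_unstablerevset(repo):
    return set(repo.revs('(obsolete()::) - obsolete()'))
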
@cachefor('suspended')
def _computesuspendedset(repo):
    """the set of obsolete changesets with non-obsolete descendants"""
    suspended = repo.changelog.ancestors(getrevs(repo, 'unstable'))
    return set(r for r in getrevs(repo, 'obsolete') if r in suspended)

@cachefor('extinct')
def _computeextinctset(repo):
    """the set of obsolete changesets without any non-obsolete descendants"""
    return getrevs(repo, 'obsolete') - getrevs(repo, 'suspended')


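# Illustrative check (not part of the original module) of the invariants
# tying these caches together: 'suspended' and 'extinct' partition the
# 'obsolete' set according to whether live descendants remain.
def _example_setinvariants(repo):
    if not repo.obsstore:
        return
    obs = getrevs(repo, 'obsolete')
    suspended = getrevs(repo, 'suspended')
    extinct = getrevs(repo, 'extinct')
    assert suspended | extinct == obs
    assert not (suspended & extinct)
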
@cachefor('bumped')
def _computebumpedset(repo):
    """the set of revs trying to obsolete public revisions"""
    bumped = set()
    # util functions (avoid attribute lookups in the loop)
    phase = repo._phasecache.phase # would be faster to grab the full list
    public = phases.public
    cl = repo.changelog
    torev = cl.nodemap.get
    obs = getrevs(repo, 'obsolete')
    for rev in repo:
        # We only evaluate mutable, non-obsolete revisions
        if (public < phase(repo, rev)) and (rev not in obs):
            node = cl.node(rev)
            # (future) A cache of precursors may be worth it if splits are
            # very common
            for pnode in allprecursors(repo.obsstore, [node],
                                       ignoreflags=bumpedfix):
                prev = torev(pnode) # unfiltered! but so is phasecache
                if (prev is not None) and (phase(repo, prev) <= public):
                    # we have a public precursor
                    bumped.add(rev)
                    break # Next draft!
    return bumped

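# Toy scenario for the loop above (illustrative only): changeset A is
# public and a marker (A, (A',)) exists with A' still a draft. A' is then
# trying to rewrite public history, so it is reported as bumped. Markers
# carrying the bumpedfix flag were added precisely to resolve such a
# situation, which is why they are skipped when walking precursors.
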
@cachefor('divergent')
def _computedivergentset(repo):
    """the set of revs that compete to be the final successors of some
    revision.
    """
    divergent = set()
    obsstore = repo.obsstore
    newermap = {}
    for ctx in repo.set('(not public()) - obsolete()'):
        mark = obsstore.precursors.get(ctx.node(), ())
        toprocess = set(mark)
        while toprocess:
            # each marker is a tuple whose first item is its precursor node
            prec = toprocess.pop()[0]
            if prec not in newermap:
                successorssets(repo, prec, newermap)
            newer = [n for n in newermap[prec] if n]
            if len(newer) > 1:
                divergent.add(ctx.rev())
                break
            toprocess.update(obsstore.precursors.get(prec, ()))
    return divergent


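# Toy scenario for the walk above (illustrative only): two people
# independently amend the same draft changeset A, producing the markers
# (A, (B,)) and (A, (C,)). successorssets() then reports two non-empty
# successors sets for A, [B] and [C], so both B and C end up in the
# divergent set.
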
def createmarkers(repo, relations, flag=0, date=None, metadata=None):
    """Add obsolete markers between changesets in a repo

    <relations> must be an iterable of (<old>, (<new>, ...)[,{metadata}])
    tuples. `old` and `news` are changectx objects. metadata is an optional
    dictionary containing metadata for this marker only. It is merged with
    the global metadata specified through the `metadata` argument of this
    function.

    Trying to obsolete a public changeset will raise an exception.

    The current user and date are used except if specified otherwise in the
    metadata attribute.

    This function operates within a transaction of its own, but does
    not take any lock on the repo.
    """
    # prepare metadata
    if metadata is None:
        metadata = {}
    if 'user' not in metadata:
        metadata['user'] = repo.ui.username()
    tr = repo.transaction('add-obsolescence-marker')
    try:
        for rel in relations:
            prec = rel[0]
            sucs = rel[1]
            localmetadata = metadata.copy()
            if 2 < len(rel):
                localmetadata.update(rel[2])

            if not prec.mutable():
                raise util.Abort("cannot obsolete immutable changeset: %s"
                                 % prec)
            nprec = prec.node()
            nsucs = tuple(s.node() for s in sucs)
            npare = None
            if not nsucs:
                # a prune marker records the parents of the pruned changeset
                npare = tuple(p.node() for p in prec.parents())
            if nprec in nsucs:
                raise util.Abort("changeset %s cannot obsolete itself" % prec)
            repo.obsstore.create(tr, nprec, nsucs, flag, parents=npare,
                                 date=date, metadata=localmetadata)
            # new markers may change which revisions are hidden
            repo.filteredrevcache.clear()
        tr.close()
    finally:
        tr.release()
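
# Hypothetical usage sketch (not part of the original module; the 'note'
# key and the helper name are made up): record that `old` was rewritten
# into `new`, attaching per-relation metadata on top of the defaults.
# Callers are expected to hold the repo lock; createmarkers() only opens
# its own transaction.
def _example_recordamend(repo, old, new):
    createmarkers(repo, [(old, (new,), {'note': 'amended'})])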