obsolete: have _readmarkers return the format version...
Pierre-Yves David
r22332:13e22358 default
@@ -1,989 +1,990
# obsolete.py - obsolete markers handling
#
# Copyright 2012 Pierre-Yves David <pierre-yves.david@ens-lyon.org>
#                Logilab SA        <contact@logilab.fr>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""Obsolete marker handling

An obsolete marker maps an old changeset to a list of new
changesets. If the list of new changesets is empty, the old changeset
is said to be "killed". Otherwise, the old changeset is being
"replaced" by the new changesets.

Obsolete markers can be used to record and distribute changeset graph
transformations performed by history rewrite operations, and help
building new tools to reconcile conflicting rewrite actions. To
facilitate conflict resolution, markers include various annotations
besides old and new changeset identifiers, such as creation date or
author name.

The old obsoleted changeset is called a "precursor" and possible
replacements are called "successors". Markers that used changeset X as
a precursor are called "successor markers of X" because they hold
information about the successors of X. Markers that use changeset Y as
a successor are called "precursor markers of Y" because they hold
information about the precursors of Y.

Examples:

- When changeset A is replaced by changeset A', one marker is stored:

    (A, (A',))

- When changesets A and B are folded into a new changeset C, two markers are
  stored:

    (A, (C,)) and (B, (C,))

- When changeset A is simply "pruned" from the graph, a marker is created:

    (A, ())

- When changeset A is split into B and C, a single marker is used:

    (A, (B, C))

  We use a single marker to distinguish the "split" case from the "divergence"
  case. If two independent operations rewrite the same changeset A into A' and
  A'', we have an error case: divergent rewriting. We can detect it because
  two markers will be created independently:

  (A, (B,)) and (A, (C,))

Format
------

Markers are stored in an append-only file stored in
'.hg/store/obsstore'.

The file starts with a version header:

- 1 unsigned byte: version number, starting at zero.


The header is followed by the markers. Each marker is made of:

- 1 unsigned byte: number of new changesets "N", can be zero.

- 1 unsigned 32-bit integer: metadata size "M" in bytes.

- 1 byte: a bit field. It is reserved for flags used in common
  obsolete marker operations, to avoid repeated decoding of metadata
  entries.

- 20 bytes: obsoleted changeset identifier.

- N*20 bytes: new changesets identifiers.

- M bytes: metadata as a sequence of nul-terminated strings. Each
  string contains a key and a value, separated by a colon ':', without
  additional encoding. Keys cannot contain '\0' or ':' and values
  cannot contain '\0'.

"""
import struct
import util, base85, node
import phases
from i18n import _

_pack = struct.pack
_unpack = struct.unpack

_SEEK_END = 2 # os.SEEK_END was introduced in Python 2.5

# the obsolete feature is not mature enough to be enabled by default.
# you have to rely on a third party extension to enable this.
_enabled = False

# data used for parsing and writing
_fm0version = 0
_fm0fixed = '>BIB20s'
_fm0node = '20s'
_fm0fsize = struct.calcsize(_fm0fixed)
_fm0fnodesize = struct.calcsize(_fm0node)

### obsolescence marker flag

## bumpedfix flag
#
# When a changeset A' succeeds a changeset A which became public, we call A'
# "bumped" because it's a successor of a public changeset
#
#     o A' (bumped)
#     |`:
#     | o A
#     |/
#     o Z
#
# The way to solve this situation is to create a new changeset Ad as a child
# of A. This changeset has the same content as A'. So the diff from A to A'
# is the same as the diff from A to Ad. Ad is marked as a successor of A'
#
#     o Ad
#     |`:
#     | x A'
#     |'|
#     o | A
#     |/
#     o Z
#
# But by transitivity Ad is also a successor of A. To avoid having Ad marked
# as bumped too, we add the `bumpedfix` flag to the marker, <A', (Ad,)>.
# This flag means that the successor expresses the changes between the public
# and bumped versions and fixes the situation, breaking the transitivity of
# "bumped" here.
bumpedfix = 1

def _readmarkers(data):
    """Read and enumerate markers from raw data"""
    off = 0
    diskversion = _unpack('>B', data[off:off + 1])[0]
    off += 1
    if diskversion not in formats:
        raise util.Abort(_('parsing obsolete marker: unknown version %r')
                         % diskversion)
-    return formats[diskversion][0](data, off)
+    return diskversion, formats[diskversion][0](data, off)

def _fm0readmarkers(data, off=0):
    """Read and enumerate markers from raw data in format version 0"""
    # Loop on markers
    l = len(data)
    while off + _fm0fsize <= l:
        # read fixed part
        cur = data[off:off + _fm0fsize]
        off += _fm0fsize
        nbsuc, mdsize, flags, pre = _unpack(_fm0fixed, cur)
        # read replacement
        sucs = ()
        if nbsuc:
            s = (_fm0fnodesize * nbsuc)
            cur = data[off:off + s]
            sucs = _unpack(_fm0node * nbsuc, cur)
            off += s
        # read metadata
        # (metadata will be decoded on demand)
        metadata = data[off:off + mdsize]
        if len(metadata) != mdsize:
            raise util.Abort(_('parsing obsolete marker: metadata is too '
                               'short, %d bytes expected, got %d')
                             % (mdsize, len(metadata)))
        off += mdsize
        meta = decodemeta(metadata)
        try:
            when, offset = meta.pop('date', '0 0').split(' ')
            date = float(when), int(offset)
        except ValueError:
            date = (0., 0)
        parents = None
        if 'p2' in meta:
            parents = (meta.pop('p1', None), meta.pop('p2', None))
        elif 'p1' in meta:
            parents = (meta.pop('p1', None),)
        elif 'p0' in meta:
            parents = ()
        if parents is not None:
            try:
                parents = tuple(node.bin(p) for p in parents)
                # if parent content is not a nodeid, drop the data
                for p in parents:
                    if len(p) != 20:
                        parents = None
                        break
            except TypeError:
                # if content cannot be translated to nodeid, drop the data.
                parents = None

        metadata = encodemeta(meta)

        yield (pre, sucs, flags, metadata, date, parents)

def _fm0encodeonemarker(marker):
    pre, sucs, flags, metadata, date, parents = marker
    metadata = decodemeta(metadata)
    metadata['date'] = '%d %i' % date
    if parents is not None:
        if not parents:
            # mark that we explicitly recorded no parents
            metadata['p0'] = ''
        for i, p in enumerate(parents):
            metadata['p%i' % (i + 1)] = node.hex(p)
    metadata = encodemeta(metadata)
    nbsuc = len(sucs)
    format = _fm0fixed + (_fm0node * nbsuc)
    data = [nbsuc, len(metadata), flags, pre]
    data.extend(sucs)
    return _pack(format, *data) + metadata

# mapping to read/write various marker formats
# <version> -> (decoder, encoder)
formats = {0: (_fm0readmarkers, _fm0encodeonemarker)}
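
# A minimal sketch (editor's illustration, not part of the original module):
# encode one marker by hand and read it back; _readmarkers now reports the
# on-disk format version alongside the decoded markers:
#
#     >>> raw = _pack('>B', _fm0version) + _fm0encodeonemarker(
#     ...     ('\x01' * 20, ('\x02' * 20,), 0, '', (0.0, 0), None))
#     >>> version, markers = _readmarkers(raw)
#     >>> version, len(list(markers))
#     (0, 1)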

def encodemeta(meta):
    """Return encoded metadata string to string mapping.

    Assume no ':' in key and no '\0' in both key and value."""
    for key, value in meta.iteritems():
        if ':' in key or '\0' in key:
            raise ValueError("':' and '\0' are forbidden in metadata keys")
        if '\0' in value:
            raise ValueError("'\0' is forbidden in metadata values")
    return '\0'.join(['%s:%s' % (k, meta[k]) for k in sorted(meta)])

def decodemeta(data):
    """Return string to string dictionary from encoded version."""
    d = {}
    for l in data.split('\0'):
        if l:
            key, value = l.split(':')
            d[key] = value
    return d
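
# A minimal sketch (editor's illustration, not part of the original module):
# metadata round-trips through encodemeta/decodemeta, with keys sorted on
# encoding:
#
#     >>> encodemeta({'user': 'alice', 'note': 'amended'})
#     'note:amended\x00user:alice'
#     >>> sorted(decodemeta('note:amended\x00user:alice').items())
#     [('note', 'amended'), ('user', 'alice')]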

class marker(object):
    """Wrap obsolete marker raw data"""

    def __init__(self, repo, data):
        # the repo argument will be used to create changectx in later versions
        self._repo = repo
        self._data = data
        self._decodedmeta = None

    def __hash__(self):
        return hash(self._data)

    def __eq__(self, other):
        if type(other) != type(self):
            return False
        return self._data == other._data

    def precnode(self):
        """Precursor changeset node identifier"""
        return self._data[0]

    def succnodes(self):
        """List of successor changeset node identifiers"""
        return self._data[1]

    def parentnodes(self):
        """Parents of the precursors (None if not recorded)"""
        return self._data[5]

    def metadata(self):
        """Decoded metadata dictionary"""
        if self._decodedmeta is None:
            self._decodedmeta = decodemeta(self._data[3])
        return self._decodedmeta

    def date(self):
        """Creation date as (unixtime, offset)"""
        return self._data[4]

    def flags(self):
        """The flags field of the marker"""
        return self._data[2]

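# A minimal sketch (editor's illustration, not part of the original module):
# wrapping raw marker data; the repo argument is only needed to build
# changectx objects later, so None is enough for simple inspection:
#
#     >>> m = marker(None, ('\x01' * 20, (), 0, '', (0.0, 0), None))
#     >>> m.succnodes(), m.flags()
#     ((), 0)
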
class obsstore(object):
    """Store obsolete markers

    Markers can be accessed with three mappings:
    - precursors[x] -> set(markers on precursors edges of x)
    - successors[x] -> set(markers on successors edges of x)
    - children[x]   -> set(markers on precursors edges of children(x))
    """

    fields = ('prec', 'succs', 'flag', 'meta', 'date', 'parents')
    # prec:    nodeid, precursor changesets
    # succs:   tuple of nodeid, successor changesets (0-N length)
    # flag:    integer, flag field carrying modifier for the markers (see doc)
    # meta:    binary blob, encoded metadata dictionary
    # date:    (float, int) tuple, date of marker creation
    # parents: (tuple of nodeid) or None, parents of precursors
    #          None is used when no data has been recorded

    def __init__(self, sopener):
        # caches for various obsolescence related data
        self.caches = {}
        self._all = []
        self.precursors = {}
        self.successors = {}
        self.children = {}
        self.sopener = sopener
        data = sopener.tryread('obsstore')
        if data:
-            self._load(_readmarkers(data))
+            version, markers = _readmarkers(data)
+            self._load(markers)

    def __iter__(self):
        return iter(self._all)

    def __len__(self):
        return len(self._all)

    def __nonzero__(self):
        return bool(self._all)

    def create(self, transaction, prec, succs=(), flag=0, parents=None,
               date=None, metadata=None):
        """obsolete: add a new obsolete marker

        * ensure it is hashable
        * check mandatory metadata
        * encode metadata

        If you are a human writing code creating markers, you want to use the
        `createmarkers` function in this module instead.

        Return True if a new marker has been added, False if the marker
        already existed (no op).
        """
        if metadata is None:
            metadata = {}
        if date is None:
            if 'date' in metadata:
                # as a courtesy for out-of-tree extensions
                date = util.parsedate(metadata.pop('date'))
            else:
                date = util.makedate()
        if len(prec) != 20:
            raise ValueError(prec)
        for succ in succs:
            if len(succ) != 20:
                raise ValueError(succ)
        if prec in succs:
            raise ValueError(_('in-marker cycle with %s') % node.hex(prec))
        marker = (str(prec), tuple(succs), int(flag), encodemeta(metadata),
                  date, parents)
        return bool(self.add(transaction, [marker]))

    def add(self, transaction, markers):
        """Add new markers to the store

        Take care of filtering duplicates.
        Return the number of new markers."""
        if not _enabled:
            raise util.Abort('obsolete feature is not enabled on this repo')
        known = set(self._all)
        new = []
        for m in markers:
            if m not in known:
                known.add(m)
                new.append(m)
        if new:
            f = self.sopener('obsstore', 'ab')
            try:
                # Whether the file's current position is at the beginning or
                # at the end after opening a file for appending is
                # implementation defined. So we must seek to the end before
                # calling tell(), or we may get a zero offset for non-zero
                # sized files on some platforms (issue3543).
                f.seek(0, _SEEK_END)
                offset = f.tell()
                transaction.add('obsstore', offset)
                # offset == 0: new file - add the version header
                for bytes in _encodemarkers(new, offset == 0):
                    f.write(bytes)
            finally:
                # XXX: f.close() == filecache invalidation == obsstore rebuilt.
                # call 'filecacheentry.refresh()' here
                f.close()
            self._load(new)
            # new markers *may* have changed several sets. invalidate the
            # caches.
            self.caches.clear()
        return len(new)

    def mergemarkers(self, transaction, data):
        """merge a binary stream of markers inside the obsstore

        Returns the number of new markers added."""
-        markers = _readmarkers(data)
+        version, markers = _readmarkers(data)
        return self.add(transaction, markers)

    def _load(self, markers):
        for mark in markers:
            self._all.append(mark)
            pre, sucs = mark[:2]
            self.successors.setdefault(pre, set()).add(mark)
            for suc in sucs:
                self.precursors.setdefault(suc, set()).add(mark)
            parents = mark[5]
            if parents is not None:
                for p in parents:
                    self.children.setdefault(p, set()).add(mark)
        if node.nullid in self.precursors:
            raise util.Abort(_('bad obsolescence marker detected: '
                               'invalid successors nullid'))

    def relevantmarkers(self, nodes):
        """return a set of all obsolescence markers relevant to a set of nodes.

        "relevant" to a set of nodes means:

        - markers that use this changeset as a successor
        - prune markers of direct children of this changeset
        - recursive application of the two rules on precursors of these
          markers

        It is a set so you cannot rely on order."""

        pendingnodes = set(nodes)
        seenmarkers = set()
        seennodes = set(pendingnodes)
        precursorsmarkers = self.precursors
        children = self.children
        while pendingnodes:
            direct = set()
            for current in pendingnodes:
                direct.update(precursorsmarkers.get(current, ()))
                pruned = [m for m in children.get(current, ()) if not m[1]]
                direct.update(pruned)
            direct -= seenmarkers
            pendingnodes = set([m[0] for m in direct])
            seenmarkers |= direct
            pendingnodes -= seennodes
            seennodes |= pendingnodes
        return seenmarkers

def _encodemarkers(markers, addheader=False, version=_fm0version):
    # Kept separate from flushmarkers(), it will be reused for
    # markers exchange.
    encodeone = formats[version][1]
    if addheader:
        yield _pack('>B', version)
    for marker in markers:
        yield encodeone(marker)


# arbitrarily picked to fit into the 8K limit from the HTTP server
# you have to take into account:
# - the version header
# - the base85 encoding
_maxpayload = 5300

def _pushkeyescape(markers):
    """encode markers into a dict suitable for pushkey exchange

    - binary data is base85 encoded
    - split in chunks smaller than 5300 bytes"""
    keys = {}
    parts = []
    currentlen = _maxpayload * 2  # ensure we create a new part
    for marker in markers:
        nextdata = _fm0encodeonemarker(marker)
        if (len(nextdata) + currentlen > _maxpayload):
            currentpart = []
            currentlen = 0
            parts.append(currentpart)
        currentpart.append(nextdata)
        currentlen += len(nextdata)
    for idx, part in enumerate(reversed(parts)):
        data = ''.join([_pack('>B', _fm0version)] + part)
        keys['dump%i' % idx] = base85.b85encode(data)
    return keys
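
# A minimal sketch (editor's illustration, not part of the original module):
# an empty marker list produces no pushkey entries; otherwise each base85
# chunk is keyed 'dump0', 'dump1', ...:
#
#     >>> _pushkeyescape([])
#     {}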

def listmarkers(repo):
    """List markers over pushkey"""
    if not repo.obsstore:
        return {}
    return _pushkeyescape(repo.obsstore)

def pushmarker(repo, key, old, new):
    """Push markers over pushkey"""
    if not key.startswith('dump'):
        repo.ui.warn(_('unknown key: %r') % key)
        return 0
    if old:
        repo.ui.warn(_('unexpected old value for %r') % key)
        return 0
    data = base85.b85decode(new)
    lock = repo.lock()
    try:
        tr = repo.transaction('pushkey: obsolete markers')
        try:
            repo.obsstore.mergemarkers(tr, data)
            tr.close()
            return 1
        finally:
            tr.release()
    finally:
        lock.release()

def getmarkers(repo, nodes=None):
    """returns markers known in a repository

    If <nodes> is specified, only markers "relevant" to those nodes are
    returned"""
    if nodes is None:
        rawmarkers = repo.obsstore
    else:
        rawmarkers = repo.obsstore.relevantmarkers(nodes)

    for markerdata in rawmarkers:
        yield marker(repo, markerdata)

def relevantmarkers(repo, node):
    """all obsolete markers relevant to some revision"""
    for markerdata in repo.obsstore.relevantmarkers(node):
        yield marker(repo, markerdata)


def precursormarkers(ctx):
    """obsolete markers marking this changeset as a successor"""
    for data in ctx._repo.obsstore.precursors.get(ctx.node(), ()):
        yield marker(ctx._repo, data)

def successormarkers(ctx):
    """obsolete markers making this changeset obsolete"""
    for data in ctx._repo.obsstore.successors.get(ctx.node(), ()):
        yield marker(ctx._repo, data)

def allsuccessors(obsstore, nodes, ignoreflags=0):
    """Yield node for every successor of <nodes>.

    Some successors may be unknown locally.

    This is a linear yield unsuited to detecting split changesets. It includes
    initial nodes too."""
    remaining = set(nodes)
    seen = set(remaining)
    while remaining:
        current = remaining.pop()
        yield current
        for mark in obsstore.successors.get(current, ()):
            # ignore markers flagged with the specified flag
            if mark[2] & ignoreflags:
                continue
            for suc in mark[1]:
                if suc not in seen:
                    seen.add(suc)
                    remaining.add(suc)

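# A minimal sketch (editor's illustration, not part of the original module):
# allsuccessors only relies on the `successors` mapping, so a stub object is
# enough to show the linear walk over a chain A -> B -> C:
#
#     >>> class fakestore(object):
#     ...     successors = {'A': [('A', ('B',), 0, '', (0.0, 0), None)],
#     ...                   'B': [('B', ('C',), 0, '', (0.0, 0), None)]}
#     >>> sorted(allsuccessors(fakestore(), ['A']))
#     ['A', 'B', 'C']
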
559 def allprecursors(obsstore, nodes, ignoreflags=0):
560 def allprecursors(obsstore, nodes, ignoreflags=0):
560 """Yield node for every precursors of <nodes>.
561 """Yield node for every precursors of <nodes>.
561
562
562 Some precursors may be unknown locally.
563 Some precursors may be unknown locally.
563
564
564 This is a linear yield unsuited to detecting folded changesets. It includes
565 This is a linear yield unsuited to detecting folded changesets. It includes
565 initial nodes too."""
566 initial nodes too."""
566
567
567 remaining = set(nodes)
568 remaining = set(nodes)
568 seen = set(remaining)
569 seen = set(remaining)
569 while remaining:
570 while remaining:
570 current = remaining.pop()
571 current = remaining.pop()
571 yield current
572 yield current
572 for mark in obsstore.precursors.get(current, ()):
573 for mark in obsstore.precursors.get(current, ()):
573 # ignore marker flagged with specified flag
574 # ignore marker flagged with specified flag
574 if mark[2] & ignoreflags:
575 if mark[2] & ignoreflags:
575 continue
576 continue
576 suc = mark[0]
577 suc = mark[0]
577 if suc not in seen:
578 if suc not in seen:
578 seen.add(suc)
579 seen.add(suc)
579 remaining.add(suc)
580 remaining.add(suc)
580
581
581 def foreground(repo, nodes):
582 def foreground(repo, nodes):
582 """return all nodes in the "foreground" of other node
583 """return all nodes in the "foreground" of other node
583
584
584 The foreground of a revision is anything reachable using parent -> children
585 The foreground of a revision is anything reachable using parent -> children
585 or precursor -> successor relation. It is very similar to "descendant" but
586 or precursor -> successor relation. It is very similar to "descendant" but
586 augmented with obsolescence information.
587 augmented with obsolescence information.
587
588
588 Beware that possible obsolescence cycle may result if complex situation.
589 Beware that possible obsolescence cycle may result if complex situation.
589 """
590 """
590 repo = repo.unfiltered()
591 repo = repo.unfiltered()
591 foreground = set(repo.set('%ln::', nodes))
592 foreground = set(repo.set('%ln::', nodes))
592 if repo.obsstore:
593 if repo.obsstore:
593 # We only need this complicated logic if there is obsolescence
594 # We only need this complicated logic if there is obsolescence
594 # XXX will probably deserve an optimised revset.
595 # XXX will probably deserve an optimised revset.
595 nm = repo.changelog.nodemap
596 nm = repo.changelog.nodemap
596 plen = -1
597 plen = -1
597 # compute the whole set of successors or descendants
598 # compute the whole set of successors or descendants
598 while len(foreground) != plen:
599 while len(foreground) != plen:
599 plen = len(foreground)
600 plen = len(foreground)
600 succs = set(c.node() for c in foreground)
601 succs = set(c.node() for c in foreground)
601 mutable = [c.node() for c in foreground if c.mutable()]
602 mutable = [c.node() for c in foreground if c.mutable()]
602 succs.update(allsuccessors(repo.obsstore, mutable))
603 succs.update(allsuccessors(repo.obsstore, mutable))
603 known = (n for n in succs if n in nm)
604 known = (n for n in succs if n in nm)
604 foreground = set(repo.set('%ln::', known))
605 foreground = set(repo.set('%ln::', known))
605 return set(c.node() for c in foreground)
606 return set(c.node() for c in foreground)
606
607
607
608
608 def successorssets(repo, initialnode, cache=None):
609 def successorssets(repo, initialnode, cache=None):
609 """Return all set of successors of initial nodes
610 """Return all set of successors of initial nodes
610
611
611 The successors set of a changeset A are a group of revisions that succeed
612 The successors set of a changeset A are a group of revisions that succeed
612 A. It succeeds A as a consistent whole, each revision being only a partial
613 A. It succeeds A as a consistent whole, each revision being only a partial
613 replacement. The successors set contains non-obsolete changesets only.
614 replacement. The successors set contains non-obsolete changesets only.
614
615
615 This function returns the full list of successor sets which is why it
616 This function returns the full list of successor sets which is why it
616 returns a list of tuples and not just a single tuple. Each tuple is a valid
617 returns a list of tuples and not just a single tuple. Each tuple is a valid
617 successors set. Not that (A,) may be a valid successors set for changeset A
618 successors set. Not that (A,) may be a valid successors set for changeset A
618 (see below).
619 (see below).
619
620
620 In most cases, a changeset A will have a single element (e.g. the changeset
621 In most cases, a changeset A will have a single element (e.g. the changeset
621 A is replaced by A') in its successors set. Though, it is also common for a
622 A is replaced by A') in its successors set. Though, it is also common for a
622 changeset A to have no elements in its successor set (e.g. the changeset
623 changeset A to have no elements in its successor set (e.g. the changeset
623 has been pruned). Therefore, the returned list of successors sets will be
624 has been pruned). Therefore, the returned list of successors sets will be
624 [(A',)] or [], respectively.
625 [(A',)] or [], respectively.
625
626
626 When a changeset A is split into A' and B', however, it will result in a
627 When a changeset A is split into A' and B', however, it will result in a
627 successors set containing more than a single element, i.e. [(A',B')].
628 successors set containing more than a single element, i.e. [(A',B')].
628 Divergent changesets will result in multiple successors sets, i.e. [(A',),
629 Divergent changesets will result in multiple successors sets, i.e. [(A',),
629 (A'')].
630 (A'')].
630
631
631 If a changeset A is not obsolete, then it will conceptually have no
632 If a changeset A is not obsolete, then it will conceptually have no
632 successors set. To distinguish this from a pruned changeset, the successor
633 successors set. To distinguish this from a pruned changeset, the successor
633 set will only contain itself, i.e. [(A,)].
634 set will only contain itself, i.e. [(A,)].
634
635
635 Finally, successors unknown locally are considered to be pruned (obsoleted
636 Finally, successors unknown locally are considered to be pruned (obsoleted
636 without any successors).
637 without any successors).
637
638
638 The optional `cache` parameter is a dictionary that may contain precomputed
639 The optional `cache` parameter is a dictionary that may contain precomputed
639 successors sets. It is meant to reuse the computation of a previous call to
640 successors sets. It is meant to reuse the computation of a previous call to
640 `successorssets` when multiple calls are made at the same time. The cache
641 `successorssets` when multiple calls are made at the same time. The cache
641 dictionary is updated in place. The caller is responsible for its live
642 dictionary is updated in place. The caller is responsible for its live
642 spawn. Code that makes multiple calls to `successorssets` *must* use this
643 spawn. Code that makes multiple calls to `successorssets` *must* use this
643 cache mechanism or suffer terrible performances.
644 cache mechanism or suffer terrible performances.
644
645
645 """
646 """
646
647
647 succmarkers = repo.obsstore.successors
648 succmarkers = repo.obsstore.successors
648
649
649 # Stack of nodes we search successors sets for
650 # Stack of nodes we search successors sets for
650 toproceed = [initialnode]
651 toproceed = [initialnode]
651 # set version of above list for fast loop detection
652 # set version of above list for fast loop detection
652 # element added to "toproceed" must be added here
653 # element added to "toproceed" must be added here
653 stackedset = set(toproceed)
654 stackedset = set(toproceed)
654 if cache is None:
655 if cache is None:
655 cache = {}
656 cache = {}
656
657
657 # This while loop is the flattened version of a recursive search for
658 # This while loop is the flattened version of a recursive search for
658 # successors sets
659 # successors sets
659 #
660 #
660 # def successorssets(x):
661 # def successorssets(x):
661 # successors = directsuccessors(x)
662 # successors = directsuccessors(x)
662 # ss = [[]]
663 # ss = [[]]
663 # for succ in directsuccessors(x):
664 # for succ in directsuccessors(x):
664 # # product as in itertools cartesian product
665 # # product as in itertools cartesian product
665 # ss = product(ss, successorssets(succ))
666 # ss = product(ss, successorssets(succ))
666 # return ss
667 # return ss
667 #
668 #
668 # But we can not use plain recursive calls here:
669 # But we can not use plain recursive calls here:
669 # - that would blow the python call stack
670 # - that would blow the python call stack
670 # - obsolescence markers may have cycles, we need to handle them.
671 # - obsolescence markers may have cycles, we need to handle them.
671 #
672 #
672 # The `toproceed` list act as our call stack. Every node we search
673 # The `toproceed` list act as our call stack. Every node we search
673 # successors set for are stacked there.
674 # successors set for are stacked there.
674 #
675 #
675 # The `stackedset` is set version of this stack used to check if a node is
676 # The `stackedset` is set version of this stack used to check if a node is
676 # already stacked. This check is used to detect cycles and prevent infinite
677 # already stacked. This check is used to detect cycles and prevent infinite
677 # loop.
678 # loop.
678 #
679 #
679 # successors set of all nodes are stored in the `cache` dictionary.
680 # successors set of all nodes are stored in the `cache` dictionary.
680 #
681 #
681 # After this while loop ends we use the cache to return the successors sets
682 # After this while loop ends we use the cache to return the successors sets
682 # for the node requested by the caller.
683 # for the node requested by the caller.
683 while toproceed:
684 while toproceed:
684 # Every iteration tries to compute the successors sets of the topmost
685 # Every iteration tries to compute the successors sets of the topmost
685 # node of the stack: CURRENT.
686 # node of the stack: CURRENT.
686 #
687 #
687 # There are four possible outcomes:
688 # There are four possible outcomes:
688 #
689 #
689 # 1) We already know the successors sets of CURRENT:
690 # 1) We already know the successors sets of CURRENT:
690 # -> mission accomplished, pop it from the stack.
691 # -> mission accomplished, pop it from the stack.
691 # 2) Node is not obsolete:
692 # 2) Node is not obsolete:
692 # -> the node is its own successors sets. Add it to the cache.
693 # -> the node is its own successors sets. Add it to the cache.
693 # 3) We do not know successors set of direct successors of CURRENT:
694 # 3) We do not know successors set of direct successors of CURRENT:
694 # -> We add those successors to the stack.
695 # -> We add those successors to the stack.
695 # 4) We know successors sets of all direct successors of CURRENT:
696 # 4) We know successors sets of all direct successors of CURRENT:
696 # -> We can compute CURRENT successors set and add it to the
697 # -> We can compute CURRENT successors set and add it to the
697 # cache.
698 # cache.
698 #
699 #
699 current = toproceed[-1]
700 current = toproceed[-1]
700 if current in cache:
701 if current in cache:
701 # case (1): We already know the successors sets
702 # case (1): We already know the successors sets
702 stackedset.remove(toproceed.pop())
703 stackedset.remove(toproceed.pop())
703 elif current not in succmarkers:
704 elif current not in succmarkers:
704 # case (2): The node is not obsolete.
705 # case (2): The node is not obsolete.
705 if current in repo:
706 if current in repo:
706 # We have a valid last successors.
707 # We have a valid last successors.
707 cache[current] = [(current,)]
708 cache[current] = [(current,)]
708 else:
709 else:
709 # Final obsolete version is unknown locally.
710 # Final obsolete version is unknown locally.
710 # Do not count that as a valid successors
711 # Do not count that as a valid successors
711 cache[current] = []
712 cache[current] = []
712 else:
713 else:
713 # cases (3) and (4)
714 # cases (3) and (4)
714 #
715 #
715 # We proceed in two phases. Phase 1 aims to distinguish case (3)
716 # We proceed in two phases. Phase 1 aims to distinguish case (3)
716 # from case (4):
717 # from case (4):
717 #
718 #
718 # For each direct successors of CURRENT, we check whether its
719 # For each direct successors of CURRENT, we check whether its
719 # successors sets are known. If they are not, we stack the
720 # successors sets are known. If they are not, we stack the
720 # unknown node and proceed to the next iteration of the while
721 # unknown node and proceed to the next iteration of the while
721 # loop. (case 3)
722 # loop. (case 3)
722 #
723 #
723 # During this step, we may detect obsolescence cycles: a node
724 # During this step, we may detect obsolescence cycles: a node
724 # with unknown successors sets but already in the call stack.
725 # with unknown successors sets but already in the call stack.
725 # In such a situation, we arbitrary set the successors sets of
726 # In such a situation, we arbitrary set the successors sets of
726 # the node to nothing (node pruned) to break the cycle.
727 # the node to nothing (node pruned) to break the cycle.
727 #
728 #
728 # If no break was encountered we proceed to phase 2.
729 # If no break was encountered we proceed to phase 2.
729 #
730 #
730 # Phase 2 computes successors sets of CURRENT (case 4); see details
731 # Phase 2 computes successors sets of CURRENT (case 4); see details
731 # in phase 2 itself.
732 # in phase 2 itself.
732 #
733 #
733 # Note the two levels of iteration in each phase.
734 # Note the two levels of iteration in each phase.
734 # - The first one handles obsolescence markers using CURRENT as
735 # - The first one handles obsolescence markers using CURRENT as
735 # precursor (successors markers of CURRENT).
736 # precursor (successors markers of CURRENT).
736 #
737 #
737 # Having multiple entry here means divergence.
738 # Having multiple entry here means divergence.
738 #
739 #
739 # - The second one handles successors defined in each marker.
740 # - The second one handles successors defined in each marker.
740 #
741 #
741 # Having none means pruned node, multiple successors means split,
742 # Having none means pruned node, multiple successors means split,
742 # single successors are standard replacement.
743 # single successors are standard replacement.
743 #
744 #
744 for mark in sorted(succmarkers[current]):
745 for mark in sorted(succmarkers[current]):
745 for suc in mark[1]:
746 for suc in mark[1]:
746 if suc not in cache:
747 if suc not in cache:
747 if suc in stackedset:
748 if suc in stackedset:
748 # cycle breaking
749 # cycle breaking
749 cache[suc] = []
750 cache[suc] = []
750 else:
751 else:
751 # case (3) If we have not computed successors sets
752 # case (3) If we have not computed successors sets
752 # of one of those successors we add it to the
753 # of one of those successors we add it to the
753 # `toproceed` stack and stop all work for this
754 # `toproceed` stack and stop all work for this
754 # iteration.
755 # iteration.
755 toproceed.append(suc)
756 toproceed.append(suc)
756 stackedset.add(suc)
757 stackedset.add(suc)
757 break
758 break
758 else:
759 else:
759 continue
760 continue
760 break
761 break
761 else:
762 else:
762 # case (4): we know all successors sets of all direct
763 # case (4): we know all successors sets of all direct
763 # successors
764 # successors
764 #
765 #
765 # Successors set contributed by each marker depends on the
766 # Successors set contributed by each marker depends on the
766 # successors sets of all its "successors" node.
767 # successors sets of all its "successors" node.
767 #
768 #
768 # Each different marker is a divergence in the obsolescence
769 # Each different marker is a divergence in the obsolescence
769 # history. It contributes successors sets distinct from other
770 # history. It contributes successors sets distinct from other
770 # markers.
771 # markers.
771 #
772 #
772 # Within a marker, a successor may have divergent successors
773 # Within a marker, a successor may have divergent successors
773 # sets. In such a case, the marker will contribute multiple
774 # sets. In such a case, the marker will contribute multiple
774 # divergent successors sets. If multiple successors have
775 # divergent successors sets. If multiple successors have
775 # divergent successors sets, a Cartesian product is used.
776 # divergent successors sets, a Cartesian product is used.
776 #
777 #
777 # At the end we post-process successors sets to remove
778 # At the end we post-process successors sets to remove
778 # duplicated entry and successors set that are strict subset of
779 # duplicated entry and successors set that are strict subset of
779 # another one.
780 # another one.
780 succssets = []
781 succssets = []
781 for mark in sorted(succmarkers[current]):
782 for mark in sorted(succmarkers[current]):
782 # successors sets contributed by this marker
783 # successors sets contributed by this marker
783 markss = [[]]
784 markss = [[]]
784 for suc in mark[1]:
785 for suc in mark[1]:
785 # cardinal product with previous successors
786 # cardinal product with previous successors
786 productresult = []
787 productresult = []
787 for prefix in markss:
788 for prefix in markss:
788 for suffix in cache[suc]:
789 for suffix in cache[suc]:
789 newss = list(prefix)
790 newss = list(prefix)
790 for part in suffix:
791 for part in suffix:
791 # do not duplicated entry in successors set
792 # do not duplicated entry in successors set
792 # first entry wins.
793 # first entry wins.
793 if part not in newss:
794 if part not in newss:
794 newss.append(part)
795 newss.append(part)
795 productresult.append(newss)
796 productresult.append(newss)
796 markss = productresult
797 markss = productresult
797 succssets.extend(markss)
798 succssets.extend(markss)
798 # remove duplicated and subset
799 # remove duplicated and subset
799 seen = []
800 seen = []
800 final = []
801 final = []
801 candidate = sorted(((set(s), s) for s in succssets if s),
802 candidate = sorted(((set(s), s) for s in succssets if s),
802 key=lambda x: len(x[1]), reverse=True)
803 key=lambda x: len(x[1]), reverse=True)
803 for setversion, listversion in candidate:
804 for setversion, listversion in candidate:
804 for seenset in seen:
805 for seenset in seen:
805 if setversion.issubset(seenset):
806 if setversion.issubset(seenset):
806 break
807 break
807 else:
808 else:
808 final.append(listversion)
809 final.append(listversion)
809 seen.append(setversion)
810 seen.append(setversion)
810 final.reverse() # put small successors set first
811 final.reverse() # put small successors set first
811 cache[current] = final
812 cache[current] = final
812 return cache[initialnode]
813 return cache[initialnode]
813
814
814 def _knownrevs(repo, nodes):
815 def _knownrevs(repo, nodes):
815 """yield revision numbers of known nodes passed in parameters
816 """yield revision numbers of known nodes passed in parameters
816
817
817 Unknown revisions are silently ignored."""
818 Unknown revisions are silently ignored."""
818 torev = repo.changelog.nodemap.get
819 torev = repo.changelog.nodemap.get
819 for n in nodes:
820 for n in nodes:
820 rev = torev(n)
821 rev = torev(n)
821 if rev is not None:
822 if rev is not None:
822 yield rev
823 yield rev
823
824
824 # mapping of 'set-name' -> <function to compute this set>
825 # mapping of 'set-name' -> <function to compute this set>
825 cachefuncs = {}
826 cachefuncs = {}
826 def cachefor(name):
827 def cachefor(name):
827 """Decorator to register a function as computing the cache for a set"""
828 """Decorator to register a function as computing the cache for a set"""
828 def decorator(func):
829 def decorator(func):
829 assert name not in cachefuncs
830 assert name not in cachefuncs
830 cachefuncs[name] = func
831 cachefuncs[name] = func
831 return func
832 return func
832 return decorator
833 return decorator
833
834
834 def getrevs(repo, name):
835 def getrevs(repo, name):
835 """Return the set of revision that belong to the <name> set
836 """Return the set of revision that belong to the <name> set
836
837
837 Such access may compute the set and cache it for future use"""
838 Such access may compute the set and cache it for future use"""
838 repo = repo.unfiltered()
839 repo = repo.unfiltered()
839 if not repo.obsstore:
840 if not repo.obsstore:
840 return ()
841 return ()
841 if name not in repo.obsstore.caches:
842 if name not in repo.obsstore.caches:
842 repo.obsstore.caches[name] = cachefuncs[name](repo)
843 repo.obsstore.caches[name] = cachefuncs[name](repo)
843 return repo.obsstore.caches[name]
844 return repo.obsstore.caches[name]
844
845
# To keep things simple we need to invalidate the obsolescence caches when:
#
# - a new changeset is added
# - the public phase is changed
# - obsolescence markers are added
# - strip is used on a repo
def clearobscaches(repo):
    """Remove all obsolescence related caches from a repo

    This removes all caches in the obsstore if the obsstore already exists
    on the repo.

    (We could be smarter here given the exact event that triggered the
    cache clearing)"""
    # only clear the caches if there is obsstore data in this repo
    if 'obsstore' in repo._filecache:
        repo.obsstore.caches.clear()

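# Sketch of the expected call pattern (an assumption drawn from the list of
# invalidation events above, not code from this module): a caller that adds
# markers should drop the cached sets afterwards. `_addmarkersexample` is a
# hypothetical wrapper.
def _addmarkersexample(repo, tr, markers):
    """add raw markers to the store and invalidate the cached sets"""
    repo.obsstore.add(tr, markers)
    clearobscaches(repo)
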
@cachefor('obsolete')
def _computeobsoleteset(repo):
    """the set of obsolete revisions"""
    obs = set()
    getrev = repo.changelog.nodemap.get
    getphase = repo._phasecache.phase
    # obsstore.successors is keyed by precursor node: a known, non-public
    # revision used as precursor by at least one marker is obsolete
    for n in repo.obsstore.successors:
        rev = getrev(n)
        if rev is not None and getphase(repo, rev):
            obs.add(rev)
    return obs

@cachefor('unstable')
def _computeunstableset(repo):
    """the set of non-obsolete revisions with obsolete ancestors"""
    # revset is not efficient enough here
    # we do (obsolete()::) - obsolete() by hand
    obs = getrevs(repo, 'obsolete')
    if not obs:
        return set()
    cl = repo.changelog
    return set(r for r in cl.descendants(obs) if r not in obs)

@cachefor('suspended')
def _computesuspendedset(repo):
    """the set of obsolete revisions with non-obsolete descendants"""
    suspended = repo.changelog.ancestors(getrevs(repo, 'unstable'))
    return set(r for r in getrevs(repo, 'obsolete') if r in suspended)

@cachefor('extinct')
def _computeextinctset(repo):
    """the set of obsolete revisions without non-obsolete descendants"""
    return getrevs(repo, 'obsolete') - getrevs(repo, 'suspended')

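# Illustrative invariant (an inference from the three definitions above,
# not part of the module): 'suspended' and 'extinct' partition 'obsolete'.
# The checking helper below is hypothetical.
def _checkobspartition(repo):
    """assert that suspended and extinct partition the obsolete set"""
    if not repo.obsstore:
        return
    obs = getrevs(repo, 'obsolete')
    suspended = getrevs(repo, 'suspended')
    extinct = getrevs(repo, 'extinct')
    assert suspended | extinct == obs
    assert not (suspended & extinct)
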
@cachefor('bumped')
def _computebumpedset(repo):
    """the set of revs trying to obsolete public revisions"""
    bumped = set()
    # utility functions (avoid attribute lookups in the loop)
    phase = repo._phasecache.phase # would be faster to grab the full list
    public = phases.public
    cl = repo.changelog
    torev = cl.nodemap.get
    obs = getrevs(repo, 'obsolete')
    for rev in repo:
        # we only evaluate mutable, non-obsolete revisions
        if (public < phase(repo, rev)) and (rev not in obs):
            node = cl.node(rev)
            # (future) a cache of precursors may be worth it if splits are
            # very common
            for pnode in allprecursors(repo.obsstore, [node],
                                       ignoreflags=bumpedfix):
                prev = torev(pnode) # unfiltered! but so is phasecache
                if (prev is not None) and (phase(repo, prev) <= public):
                    # we have a public precursor
                    bumped.add(rev)
                    break # Next draft!
    return bumped

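# Minimal sketch (hypothetical helper): the same precursor walk as above,
# restricted to a single revision. It answers "is this mutable revision a
# rewrite of some public revision?".
def _isbumped(repo, rev):
    """return True if `rev` tries to obsolete a public revision"""
    phase = repo._phasecache.phase
    if phase(repo, rev) <= phases.public or rev in getrevs(repo, 'obsolete'):
        return False
    cl = repo.changelog
    torev = cl.nodemap.get
    for pnode in allprecursors(repo.obsstore, [cl.node(rev)],
                               ignoreflags=bumpedfix):
        prev = torev(pnode) # unfiltered! but so is phasecache
        if prev is not None and phase(repo, prev) <= phases.public:
            return True
    return False
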
@cachefor('divergent')
def _computedivergentset(repo):
    """the set of revs that compete to be the final successors of some
    revision.
    """
    divergent = set()
    obsstore = repo.obsstore
    newermap = {}
    for ctx in repo.set('(not public()) - obsolete()'):
        # markers that use ctx as a successor (precursor markers of ctx)
        mark = obsstore.precursors.get(ctx.node(), ())
        toprocess = set(mark)
        while toprocess:
            # mark[0] is the precursor node of the marker
            prec = toprocess.pop()[0]
            if prec not in newermap:
                successorssets(repo, prec, newermap)
            newer = [n for n in newermap[prec] if n]
            if len(newer) > 1:
                divergent.add(ctx.rev())
                break
            toprocess.update(obsstore.precursors.get(prec, ()))
    return divergent

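# Minimal usage sketch (hypothetical helper): the per-precursor test the
# loop above performs, for a single node. More than one non-empty
# successors set means the rewrites of `node` compete.
def _hascompetingsuccessors(repo, node):
    """return True if `node` has more than one non-empty successors set"""
    return len([s for s in successorssets(repo, node) if s]) > 1
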
def createmarkers(repo, relations, flag=0, date=None, metadata=None):
    """Add obsolete markers between changesets in a repo

    <relations> must be an iterable of (<old>, (<new>, ...)[, {metadata}])
    tuples. `old` and `news` are changectx objects. metadata is an optional
    dictionary containing metadata for this marker only. It is merged with
    the global metadata specified through the `metadata` argument of this
    function.

    Trying to obsolete a public changeset will raise an exception.

    The current user and date are used unless specified otherwise in the
    metadata attribute.

    This function operates within a transaction of its own, but does
    not take any lock on the repo.
    """
    # prepare metadata
    if metadata is None:
        metadata = {}
    if 'user' not in metadata:
        metadata['user'] = repo.ui.username()
    tr = repo.transaction('add-obsolescence-marker')
    try:
        for rel in relations:
            prec = rel[0]
            sucs = rel[1]
            localmetadata = metadata.copy()
            if 2 < len(rel):
                localmetadata.update(rel[2])

            if not prec.mutable():
                raise util.Abort("cannot obsolete immutable changeset: %s"
                                 % prec)
            nprec = prec.node()
            nsucs = tuple(s.node() for s in sucs)
            npare = None
            if not nsucs:
                # a prune: record the parents of the pruned changeset
                npare = tuple(p.node() for p in prec.parents())
            if nprec in nsucs:
                raise util.Abort("changeset %s cannot obsolete itself" % prec)
            repo.obsstore.create(tr, nprec, nsucs, flag, parents=npare,
                                 date=date, metadata=localmetadata)
            repo.filteredrevcache.clear()
        tr.close()
    finally:
        tr.release()
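
# Usage sketch (illustrative; the variable names are assumptions): recording
# that changectx `old` was rewritten into changectx `new`, with one
# marker-specific metadata entry on top of a global one.
def _recordrewriteexample(repo, old, new):
    """record that `old` was rewritten into `new` (hypothetical helper)"""
    createmarkers(repo, [(old, (new,), {'note': 'amended'})],
                  metadata={'operation': 'example'})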