obsolete: make encodemarkers a public function...
Pierre-Yves David
r22335:7c4c19df default
# obsolete.py - obsolete markers handling
#
# Copyright 2012 Pierre-Yves David <pierre-yves.david@ens-lyon.org>
#                Logilab SA        <contact@logilab.fr>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""Obsolete marker handling

An obsolete marker maps an old changeset to a list of new
changesets. If the list of new changesets is empty, the old changeset
is said to be "killed". Otherwise, the old changeset is being
"replaced" by the new changesets.

Obsolete markers can be used to record and distribute changeset graph
transformations performed by history rewrite operations, and help
building new tools to reconcile conflicting rewrite actions. To
facilitate conflict resolution, markers include various annotations
besides old and new changeset identifiers, such as creation date or
author name.

The old obsoleted changeset is called a "precursor" and possible
replacements are called "successors". Markers that use changeset X as
a precursor are called "successor markers of X" because they hold
information about the successors of X. Markers that use changeset Y as
a successor are called "precursor markers of Y" because they hold
information about the precursors of Y.

Examples:

- When changeset A is replaced by changeset A', one marker is stored:

    (A, (A',))

- When changesets A and B are folded into a new changeset C, two markers are
  stored:

    (A, (C,)) and (B, (C,))

- When changeset A is simply "pruned" from the graph, a marker is created:

    (A, ())

- When changeset A is split into B and C, a single marker is used:

    (A, (B, C))

We use a single marker to distinguish the "split" case from the "divergence"
case. If two independent operations rewrite the same changeset A into A' and
A'', we have an error case: divergent rewriting. We can detect it because
two markers will be created independently:

(A, (B,)) and (A, (C,))

Format
------

Markers are stored in an append-only file stored in
'.hg/store/obsstore'.

The file starts with a version header:

- 1 unsigned byte: version number, starting at zero.


The header is followed by the markers. Each marker is made of:

- 1 unsigned byte: number of new changesets "N", can be zero.

- 1 unsigned 32-bit integer: metadata size "M" in bytes.

- 1 byte: a bit field. It is reserved for flags used in common
  obsolete marker operations, to avoid repeated decoding of metadata
  entries.

- 20 bytes: obsoleted changeset identifier.

- N*20 bytes: new changeset identifiers.

- M bytes: metadata as a sequence of nul-terminated strings. Each
  string contains a key and a value, separated by a colon ':', without
  additional encoding. Keys cannot contain '\0' or ':' and values
  cannot contain '\0'.

"""
import struct
import util, base85, node
import phases
from i18n import _

_pack = struct.pack
_unpack = struct.unpack

_SEEK_END = 2 # os.SEEK_END was introduced in Python 2.5

# the obsolete feature is not mature enough to be enabled by default.
# you have to rely on a third-party extension to enable this.
_enabled = False

# data used for parsing and writing
_fm0version = 0
_fm0fixed = '>BIB20s'
_fm0node = '20s'
_fm0fsize = struct.calcsize(_fm0fixed)
_fm0fnodesize = struct.calcsize(_fm0node)
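
# Illustration (not part of the original module): pack one fm0 record by hand
# following the layout described in the module docstring. The node ids and
# metadata below are made up; the fixed part ('>BIB20s') is 26 bytes holding
# N, M, the flag byte and the 20-byte precursor id.
import struct  # already imported above; repeated so this sketch stands alone
_demoprec = b'\x11' * 20                  # hypothetical precursor id
_demosucc = b'\x22' * 20                  # hypothetical successor id
_demometa = b'user:alice'                 # one 'key:value' metadata entry
_demorecord = (struct.pack('>BIB20s', 1, len(_demometa), 0, _demoprec)
               + _demosucc + _demometa)
assert len(_demorecord) == 26 + 20 + len(_demometa)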

### obsolescence marker flag

## bumpedfix flag
#
# When a changeset A' succeeds a changeset A which became public, we call A'
# "bumped" because it's a successor of a public changeset
#
#     o  A'  (bumped)
#     |`:
#     | o  A
#     |/
#     o  Z
#
# The way to solve this situation is to create a new changeset Ad as a child
# of A. This changeset has the same content as A'. So the diff from A to A'
# is the same as the diff from A to Ad. Ad is marked as a successor of A'
#
#     o  Ad
#     |`:
#     | x  A'
#     |'|
#     o |  A
#     |/
#     o  Z
#
# But by transitivity Ad is also a successor of A. To avoid having Ad marked
# as bumped too, we add the `bumpedfix` flag to the marker, <A', (Ad,)>.
# This flag means that the successors express the changes between the public
# and bumped version and fix the situation, breaking the transitivity of
# "bumped" here.
bumpedfix = 1

def _readmarkers(data):
    """Read and enumerate markers from raw data"""
    off = 0
    diskversion = _unpack('>B', data[off:off + 1])[0]
    off += 1
    if diskversion not in formats:
        raise util.Abort(_('parsing obsolete marker: unknown version %r')
                         % diskversion)
    return diskversion, formats[diskversion][0](data, off)

def encodemarkers(markers, addheader=False, version=_fm0version):
    # Kept separate from flushmarkers(), it will be reused for
    # markers exchange.
    encodeone = formats[version][1]
    if addheader:
        yield _pack('>B', _fm0version)
    for marker in markers:
        yield encodeone(marker)

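# Sketch (not part of the original module): the stream produced by
# encodemarkers(addheader=True) is one version byte followed by concatenated
# records, and _readmarkers() checks that byte first. A hypothetical stream
# holding a single empty marker:
import struct  # already imported above; repeated so this sketch stands alone
_demostream = (struct.pack('>B', 0)
               + struct.pack('>BIB20s', 0, 0, 0, b'\x01' * 20))
assert struct.unpack('>B', _demostream[0:1])[0] == 0
assert len(_demostream) == 1 + struct.calcsize('>BIB20s')
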
def _fm0readmarkers(data, off=0):
    # Loop on markers
    l = len(data)
    while off + _fm0fsize <= l:
        # read fixed part
        cur = data[off:off + _fm0fsize]
        off += _fm0fsize
        nbsuc, mdsize, flags, pre = _unpack(_fm0fixed, cur)
        # read replacement
        sucs = ()
        if nbsuc:
            s = (_fm0fnodesize * nbsuc)
            cur = data[off:off + s]
            sucs = _unpack(_fm0node * nbsuc, cur)
            off += s
        # read metadata
        # (metadata will be decoded on demand)
        metadata = data[off:off + mdsize]
        if len(metadata) != mdsize:
            raise util.Abort(_('parsing obsolete marker: metadata is too '
                               'short, %d bytes expected, got %d')
                             % (mdsize, len(metadata)))
        off += mdsize
        meta = decodemeta(metadata)
        try:
            when, offset = decodemeta(metadata).pop('date', '0 0').split(' ')
            date = float(when), int(offset)
        except ValueError:
            date = (0., 0)
        parents = None
        if 'p2' in meta:
            parents = (meta.pop('p1', None), meta.pop('p2', None))
        elif 'p1' in meta:
            parents = (meta.pop('p1', None),)
        elif 'p0' in meta:
            parents = ()
        if parents is not None:
            try:
                parents = tuple(node.bin(p) for p in parents)
                # if parent content is not a nodeid, drop the data
                for p in parents:
                    if len(p) != 20:
                        parents = None
                        break
            except TypeError:
                # if content cannot be translated to nodeid drop the data.
                parents = None

        metadata = encodemeta(meta)

        yield (pre, sucs, flags, metadata, date, parents)

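# Standalone sketch of the date fallback in _fm0readmarkers() above: a
# missing or malformed 'date' metadata entry degrades to the epoch (0., 0)
# instead of aborting. The helper below is hypothetical, not module API.
def _demodate(meta):
    try:
        when, offset = meta.pop('date', '0 0').split(' ')
        return float(when), int(offset)
    except ValueError:
        return (0., 0)
assert _demodate({'date': '1234.5 -3600'}) == (1234.5, -3600)
assert _demodate({'date': 'bogus'}) == (0., 0)
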
def _fm0encodeonemarker(marker):
    pre, sucs, flags, metadata, date, parents = marker
    metadata = decodemeta(metadata)
    metadata['date'] = '%d %i' % date
    if parents is not None:
        if not parents:
            # mark that we explicitly recorded no parents
            metadata['p0'] = ''
        for i, p in enumerate(parents):
            metadata['p%i' % (i + 1)] = node.hex(p)
    metadata = encodemeta(metadata)
    nbsuc = len(sucs)
    format = _fm0fixed + (_fm0node * nbsuc)
    data = [nbsuc, len(metadata), flags, pre]
    data.extend(sucs)
    return _pack(format, *data) + metadata

# mapping to read/write various marker formats
# <version> -> (decoder, encoder)
formats = {0: (_fm0readmarkers, _fm0encodeonemarker)}

def encodemeta(meta):
    """Return encoded metadata string to string mapping.

    Assume no ':' in key and no '\0' in either key or value."""
    for key, value in meta.iteritems():
        if ':' in key or '\0' in key:
            raise ValueError("':' and '\0' are forbidden in metadata key")
        if '\0' in value:
            raise ValueError("'\0' is forbidden in metadata value")
    return '\0'.join(['%s:%s' % (k, meta[k]) for k in sorted(meta)])

def decodemeta(data):
    """Return string to string dictionary from encoded version."""
    d = {}
    for l in data.split('\0'):
        if l:
            key, value = l.split(':')
            d[key] = value
    return d
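
# Standalone sketch of the metadata wire format handled by encodemeta() and
# decodemeta() above: '\0'-separated 'key:value' entries with keys sorted.
# The sample dictionary is made up; split(':', 1) is used here for safety,
# while the module's decodemeta() assumes a single colon per entry.
_demometadata = {'user': 'alice', 'date': '0 0'}
_demoblob = '\0'.join('%s:%s' % (k, _demometadata[k])
                      for k in sorted(_demometadata))
assert _demoblob == 'date:0 0\x00user:alice'
_demoback = dict(e.split(':', 1) for e in _demoblob.split('\0') if e)
assert _demoback == _demometadata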

class marker(object):
    """Wrap obsolete marker raw data"""

    def __init__(self, repo, data):
        # the repo argument will be used to create changectx in later versions
        self._repo = repo
        self._data = data
        self._decodedmeta = None

    def __hash__(self):
        return hash(self._data)

    def __eq__(self, other):
        if type(other) != type(self):
            return False
        return self._data == other._data

    def precnode(self):
        """Precursor changeset node identifier"""
        return self._data[0]

    def succnodes(self):
        """List of successor changeset node identifiers"""
        return self._data[1]

    def parentnodes(self):
        """Parents of the precursors (None if not recorded)"""
        return self._data[5]

    def metadata(self):
        """Decoded metadata dictionary"""
        if self._decodedmeta is None:
            self._decodedmeta = decodemeta(self._data[3])
        return self._decodedmeta

    def date(self):
        """Creation date as (unixtime, offset)"""
        return self._data[4]

    def flags(self):
        """The flags field of the marker"""
        return self._data[2]

class obsstore(object):
    """Store obsolete markers

    Markers can be accessed with three mappings:
    - precursors[x] -> set(markers on precursors edges of x)
    - successors[x] -> set(markers on successors edges of x)
    - children[x]   -> set(markers on precursors edges of children(x))
    """

    fields = ('prec', 'succs', 'flag', 'meta', 'date', 'parents')
    # prec:    nodeid, precursor changesets
    # succs:   tuple of nodeid, successor changesets (0-N length)
    # flag:    integer, flag field carrying modifier for the markers (see doc)
    # meta:    binary blob, encoded metadata dictionary
    # date:    (float, int) tuple, date of marker creation
    # parents: (tuple of nodeid) or None, parents of precursors
    #          None is used when no data has been recorded

    def __init__(self, sopener):
        # caches for various obsolescence related computations
        self.caches = {}
        self._all = []
        self.precursors = {}
        self.successors = {}
        self.children = {}
        self.sopener = sopener
        data = sopener.tryread('obsstore')
        self._version = _fm0version
        if data:
            self._version, markers = _readmarkers(data)
            self._load(markers)

    def __iter__(self):
        return iter(self._all)

    def __len__(self):
        return len(self._all)

    def __nonzero__(self):
        return bool(self._all)

    def create(self, transaction, prec, succs=(), flag=0, parents=None,
               date=None, metadata=None):
        """obsolete: add a new obsolete marker

        * ensuring it is hashable
        * check mandatory metadata
        * encode metadata

        If you are a human writing code creating markers you want to use the
        `createmarkers` function in this module instead.

        Return True if a new marker has been added, False if the marker
        already existed (no-op).
        """
        if metadata is None:
            metadata = {}
        if date is None:
            if 'date' in metadata:
                # as a courtesy for out-of-tree extensions
                date = util.parsedate(metadata.pop('date'))
            else:
                date = util.makedate()
        if len(prec) != 20:
            raise ValueError(prec)
        for succ in succs:
            if len(succ) != 20:
                raise ValueError(succ)
        if prec in succs:
            raise ValueError(_('in-marker cycle with %s') % node.hex(prec))
        marker = (str(prec), tuple(succs), int(flag), encodemeta(metadata),
                  date, parents)
        return bool(self.add(transaction, [marker]))

    def add(self, transaction, markers):
        """Add new markers to the store

        Takes care of filtering duplicates.
        Returns the number of new markers."""
        if not _enabled:
            raise util.Abort('obsolete feature is not enabled on this repo')
        known = set(self._all)
        new = []
        for m in markers:
            if m not in known:
                known.add(m)
                new.append(m)
        if new:
            f = self.sopener('obsstore', 'ab')
            try:
                # Whether the file's current position is at the beginning or
                # at the end after opening a file for appending is
                # implementation defined. So we must seek to the end before
                # calling tell(), or we may get a zero offset for non-zero
                # sized files on some platforms (issue3543).
                f.seek(0, _SEEK_END)
                offset = f.tell()
                transaction.add('obsstore', offset)
                # offset == 0: new file - add the version header
                for bytes in encodemarkers(new, offset == 0, self._version):
                    f.write(bytes)
            finally:
                # XXX: f.close() == filecache invalidation == obsstore rebuilt.
                # call 'filecacheentry.refresh()' here
                f.close()
            self._load(new)
            # new markers *may* have changed several sets. invalidate the cache.
            self.caches.clear()
        return len(new)

    def mergemarkers(self, transaction, data):
        """merge a binary stream of markers inside the obsstore

        Returns the number of new markers added."""
        version, markers = _readmarkers(data)
        return self.add(transaction, markers)

    def _load(self, markers):
        for mark in markers:
            self._all.append(mark)
            pre, sucs = mark[:2]
            self.successors.setdefault(pre, set()).add(mark)
            for suc in sucs:
                self.precursors.setdefault(suc, set()).add(mark)
            parents = mark[5]
            if parents is not None:
                for p in parents:
                    self.children.setdefault(p, set()).add(mark)
        if node.nullid in self.precursors:
            raise util.Abort(_('bad obsolescence marker detected: '
                               'invalid successors nullid'))
    def relevantmarkers(self, nodes):
        """return a set of all obsolescence markers relevant to a set of nodes.

        "relevant" to a set of nodes means:

        - markers that use one of these changesets as a successor
        - prune markers of direct children of these changesets
        - recursive application of the two rules on precursors of these
          markers

        It is a set so you cannot rely on order."""

        pendingnodes = set(nodes)
        seenmarkers = set()
        seennodes = set(pendingnodes)
        precursorsmarkers = self.precursors
        children = self.children
        while pendingnodes:
            direct = set()
            for current in pendingnodes:
                direct.update(precursorsmarkers.get(current, ()))
                pruned = [m for m in children.get(current, ()) if not m[1]]
                direct.update(pruned)
            direct -= seenmarkers
            pendingnodes = set([m[0] for m in direct])
            seenmarkers |= direct
            pendingnodes -= seennodes
            seennodes |= pendingnodes
        return seenmarkers
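
# Toy illustration (not part of the original module) of the fixed-point walk
# in relevantmarkers() above: follow precursor markers from the requested
# nodes until no new node appears. The node names and markers are made up,
# and prune-marker handling is omitted for brevity.
_demoprecursors = {'B': [('A', ('B',))], 'A': [('Z', ('A',))]}
_demopending = set(['B'])
_demoseenmarkers = set()
_demoseennodes = set(_demopending)
while _demopending:
    _direct = set()
    for _cur in _demopending:
        _direct.update(_demoprecursors.get(_cur, ()))
    _direct -= _demoseenmarkers
    _demopending = set(m[0] for m in _direct) - _demoseennodes
    _demoseenmarkers |= _direct
    _demoseennodes |= _demopending
assert _demoseenmarkers == set([('A', ('B',)), ('Z', ('A',))])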


# arbitrarily picked to fit into the 8K limit from HTTP servers
# you have to take into account:
# - the version header
# - the base85 encoding
_maxpayload = 5300

def _pushkeyescape(markers):
    """encode markers into a dict suitable for pushkey exchange

    - binary data is base85 encoded
    - split in chunks smaller than 5300 bytes"""
    keys = {}
    parts = []
    currentlen = _maxpayload * 2 # ensure we create a new part
    for marker in markers:
        nextdata = _fm0encodeonemarker(marker)
        if (len(nextdata) + currentlen > _maxpayload):
            currentpart = []
            currentlen = 0
            parts.append(currentpart)
        currentpart.append(nextdata)
        currentlen += len(nextdata)
    for idx, part in enumerate(reversed(parts)):
        data = ''.join([_pack('>B', _fm0version)] + part)
        keys['dump%i' % idx] = base85.b85encode(data)
    return keys
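
# Self-contained sketch of the chunking policy used by _pushkeyescape()
# above: greedily pack encoded blobs into parts, opening a new part whenever
# the next blob would push the current one past the budget. The helper is
# hypothetical; the oversized initial length forces the first part, exactly
# as the `_maxpayload * 2` trick does above.
def _demochunk(blobs, maxlen):
    parts = []
    currentlen = maxlen * 2          # oversized: force a new part first
    for blob in blobs:
        if currentlen + len(blob) > maxlen:
            currentpart = []
            currentlen = 0
            parts.append(currentpart)
        currentpart.append(blob)
        currentlen += len(blob)
    return parts
assert _demochunk(['aa', 'bb', 'cc'], 4) == [['aa', 'bb'], ['cc']]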

def listmarkers(repo):
    """List markers over pushkey"""
    if not repo.obsstore:
        return {}
    return _pushkeyescape(repo.obsstore)

def pushmarker(repo, key, old, new):
    """Push markers over pushkey"""
    if not key.startswith('dump'):
        repo.ui.warn(_('unknown key: %r') % key)
        return 0
    if old:
        repo.ui.warn(_('unexpected old value for %r') % key)
        return 0
    data = base85.b85decode(new)
    lock = repo.lock()
    try:
        tr = repo.transaction('pushkey: obsolete markers')
        try:
            repo.obsstore.mergemarkers(tr, data)
            tr.close()
            return 1
        finally:
            tr.release()
    finally:
        lock.release()

def getmarkers(repo, nodes=None):
    """returns markers known in a repository

    If <nodes> is specified, only markers "relevant" to those nodes are
    returned"""
    if nodes is None:
        rawmarkers = repo.obsstore
    else:
        rawmarkers = repo.obsstore.relevantmarkers(nodes)

    for markerdata in rawmarkers:
        yield marker(repo, markerdata)

def relevantmarkers(repo, node):
    """all obsolete markers relevant to some revision"""
    for markerdata in repo.obsstore.relevantmarkers(node):
        yield marker(repo, markerdata)


def precursormarkers(ctx):
    """obsolete markers marking this changeset as a successor"""
    for data in ctx._repo.obsstore.precursors.get(ctx.node(), ()):
        yield marker(ctx._repo, data)

def successormarkers(ctx):
    """obsolete markers making this changeset obsolete"""
    for data in ctx._repo.obsstore.successors.get(ctx.node(), ()):
        yield marker(ctx._repo, data)

def allsuccessors(obsstore, nodes, ignoreflags=0):
    """Yield node for every successor of <nodes>.

    Some successors may be unknown locally.

    This is a linear yield unsuited to detecting split changesets. It includes
    initial nodes too."""
    remaining = set(nodes)
    seen = set(remaining)
    while remaining:
        current = remaining.pop()
        yield current
        for mark in obsstore.successors.get(current, ()):
            # ignore markers flagged with the specified flag
            if mark[2] & ignoreflags:
                continue
            for suc in mark[1]:
                if suc not in seen:
                    seen.add(suc)
                    remaining.add(suc)
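As a standalone sketch (plain dicts and hypothetical names, not Mercurial's actual obsstore API), the traversal above amounts to a breadth-first walk over the successors index, where each marker is modeled as a `(precursor, successors, flags)` tuple:

```python
# Hypothetical, simplified model of allsuccessors() above.
def all_successors(successors_index, nodes, ignoreflags=0):
    remaining = set(nodes)
    seen = set(remaining)
    while remaining:
        current = remaining.pop()
        yield current
        for prec, sucs, flags in successors_index.get(current, ()):
            if flags & ignoreflags:
                continue  # skip markers carrying an ignored flag
            for suc in sucs:
                if suc not in seen:
                    seen.add(suc)
                    remaining.add(suc)

# A was rewritten as B, then B was split into C and D:
index = {
    'A': [('A', ('B',), 0)],
    'B': [('B', ('C', 'D'), 0)],
}
print(sorted(all_successors(index, ['A'])))  # ['A', 'B', 'C', 'D']
```

Note that, as in the real function, the initial nodes themselves are yielded too, and cycles are harmless because `seen` prevents revisiting.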

def allprecursors(obsstore, nodes, ignoreflags=0):
    """Yield node for every precursor of <nodes>.

    Some precursors may be unknown locally.

    This is a linear yield unsuited to detecting folded changesets. It includes
    initial nodes too."""

    remaining = set(nodes)
    seen = set(remaining)
    while remaining:
        current = remaining.pop()
        yield current
        for mark in obsstore.precursors.get(current, ()):
            # ignore markers flagged with the specified flag
            if mark[2] & ignoreflags:
                continue
            suc = mark[0]
            if suc not in seen:
                seen.add(suc)
                remaining.add(suc)

def foreground(repo, nodes):
    """return all nodes in the "foreground" of other nodes

    The foreground of a revision is anything reachable using the
    parent -> children or precursor -> successor relations. It is very similar
    to "descendant" but augmented with obsolescence information.

    Beware that obsolescence cycles may arise in complex situations.
    """
    repo = repo.unfiltered()
    foreground = set(repo.set('%ln::', nodes))
    if repo.obsstore:
        # We only need this complicated logic if there is obsolescence
        # XXX will probably deserve an optimised revset.
        nm = repo.changelog.nodemap
        plen = -1
        # compute the whole set of successors or descendants
        while len(foreground) != plen:
            plen = len(foreground)
            succs = set(c.node() for c in foreground)
            mutable = [c.node() for c in foreground if c.mutable()]
            succs.update(allsuccessors(repo.obsstore, mutable))
            known = (n for n in succs if n in nm)
            foreground = set(repo.set('%ln::', known))
    return set(c.node() for c in foreground)
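The loop above is a fixed-point iteration: expand the set under both relations until its size stops changing. A standalone sketch (plain dicts, hypothetical names) of the same pattern:

```python
# Hypothetical, simplified model of the fixed-point loop in foreground():
# repeatedly expand a set under parent -> children and
# precursor -> successor relations until no new node appears.
def closure(seeds, children, successors):
    result = set(seeds)
    plen = -1
    while len(result) != plen:  # iterate until the set stabilizes
        plen = len(result)
        for n in list(result):  # snapshot: we mutate result while looping
            result.update(children.get(n, ()))
            result.update(successors.get(n, ()))
    return result

children = {'A': ['B']}    # B is a child of A
successors = {'B': ['C']}  # B was rewritten as C
print(sorted(closure({'A'}, children, successors)))  # ['A', 'B', 'C']
```

Each pass may discover nodes reachable only through a node found in the previous pass, which is why a single sweep is not enough.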


def successorssets(repo, initialnode, cache=None):
    """Return all successors sets of an initial node

    The successors set of a changeset A is a group of revisions that succeed
    A. It succeeds A as a consistent whole, each revision being only a partial
    replacement. The successors set contains non-obsolete changesets only.

    This function returns the full list of successors sets, which is why it
    returns a list of tuples and not just a single tuple. Each tuple is a valid
    successors set. Note that (A,) may be a valid successors set for changeset
    A (see below).

    In most cases, a changeset A will have a single element (e.g. the changeset
    A is replaced by A') in its successors set. Though, it is also common for a
    changeset A to have no elements in its successors set (e.g. the changeset
    has been pruned). Therefore, the returned list of successors sets will be
    [(A',)] or [], respectively.

    When a changeset A is split into A' and B', however, it will result in a
    successors set containing more than a single element, i.e. [(A',B')].
    Divergent changesets will result in multiple successors sets, i.e. [(A',),
    (A'')].

    If a changeset A is not obsolete, then it will conceptually have no
    successors set. To distinguish this from a pruned changeset, the successors
    set will only contain itself, i.e. [(A,)].

    Finally, successors unknown locally are considered to be pruned (obsoleted
    without any successors).

    The optional `cache` parameter is a dictionary that may contain precomputed
    successors sets. It is meant to reuse the computation of a previous call to
    `successorssets` when multiple calls are made at the same time. The cache
    dictionary is updated in place. The caller is responsible for its life
    span. Code that makes multiple calls to `successorssets` *must* use this
    cache mechanism or suffer terrible performance.
    """

    succmarkers = repo.obsstore.successors

    # Stack of nodes we search successors sets for
    toproceed = [initialnode]
    # set version of above list for fast loop detection
    # elements added to "toproceed" must be added here
    stackedset = set(toproceed)
    if cache is None:
        cache = {}

    # This while loop is the flattened version of a recursive search for
    # successors sets
    #
    # def successorssets(x):
    #     successors = directsuccessors(x)
    #     ss = [[]]
    #     for succ in directsuccessors(x):
    #         # product as in itertools cartesian product
    #         ss = product(ss, successorssets(succ))
    #     return ss
    #
    # But we can not use plain recursive calls here:
    # - that would blow the python call stack
    # - obsolescence markers may have cycles, we need to handle them.
    #
    # The `toproceed` list acts as our call stack. Every node we search
    # successors sets for is stacked there.
    #
    # The `stackedset` is a set version of this stack used to check if a node
    # is already stacked. This check is used to detect cycles and prevent
    # infinite loops.
    #
    # successors sets of all nodes are stored in the `cache` dictionary.
    #
    # After this while loop ends we use the cache to return the successors sets
    # for the node requested by the caller.
    while toproceed:
        # Every iteration tries to compute the successors sets of the topmost
        # node of the stack: CURRENT.
        #
        # There are four possible outcomes:
        #
        # 1) We already know the successors sets of CURRENT:
        #    -> mission accomplished, pop it from the stack.
        # 2) Node is not obsolete:
        #    -> the node is its own successors sets. Add it to the cache.
        # 3) We do not know successors sets of direct successors of CURRENT:
        #    -> We add those successors to the stack.
        # 4) We know successors sets of all direct successors of CURRENT:
        #    -> We can compute CURRENT successors sets and add them to the
        #       cache.
        #
        current = toproceed[-1]
        if current in cache:
            # case (1): We already know the successors sets
            stackedset.remove(toproceed.pop())
        elif current not in succmarkers:
            # case (2): The node is not obsolete.
            if current in repo:
                # We have a valid last successor.
                cache[current] = [(current,)]
            else:
                # Final obsolete version is unknown locally.
                # Do not count that as a valid successor.
                cache[current] = []
        else:
            # cases (3) and (4)
            #
            # We proceed in two phases. Phase 1 aims to distinguish case (3)
            # from case (4):
            #
            # For each direct successor of CURRENT, we check whether its
            # successors sets are known. If they are not, we stack the
            # unknown node and proceed to the next iteration of the while
            # loop. (case 3)
            #
            # During this step, we may detect obsolescence cycles: a node
            # with unknown successors sets but already in the call stack.
            # In such a situation, we arbitrarily set the successors sets of
            # the node to nothing (node pruned) to break the cycle.
            #
            # If no break was encountered we proceed to phase 2.
            #
            # Phase 2 computes successors sets of CURRENT (case 4); see details
            # in phase 2 itself.
            #
            # Note the two levels of iteration in each phase.
            # - The first one handles obsolescence markers using CURRENT as
            #   precursor (successors markers of CURRENT).
            #
            #   Having multiple entries here means divergence.
            #
            # - The second one handles successors defined in each marker.
            #
            #   Having none means pruned node, multiple successors means split,
            #   single successors are standard replacement.
            #
            for mark in sorted(succmarkers[current]):
                for suc in mark[1]:
                    if suc not in cache:
                        if suc in stackedset:
                            # cycle breaking
                            cache[suc] = []
                        else:
                            # case (3) If we have not computed successors sets
                            # of one of those successors we add it to the
                            # `toproceed` stack and stop all work for this
                            # iteration.
                            toproceed.append(suc)
                            stackedset.add(suc)
                            break
                else:
                    continue
                break
            else:
                # case (4): we know all successors sets of all direct
                # successors
                #
                # Successors sets contributed by each marker depend on the
                # successors sets of all its "successors" nodes.
                #
                # Each different marker is a divergence in the obsolescence
                # history. It contributes successors sets distinct from other
                # markers.
                #
                # Within a marker, a successor may have divergent successors
                # sets. In such a case, the marker will contribute multiple
                # divergent successors sets. If multiple successors have
                # divergent successors sets, a Cartesian product is used.
                #
                # At the end we post-process successors sets to remove
                # duplicated entries and successors sets that are strict
                # subsets of another one.
                succssets = []
                for mark in sorted(succmarkers[current]):
                    # successors sets contributed by this marker
                    markss = [[]]
                    for suc in mark[1]:
                        # cardinal product with previous successors
                        productresult = []
                        for prefix in markss:
                            for suffix in cache[suc]:
                                newss = list(prefix)
                                for part in suffix:
                                    # do not duplicate entries in successors
                                    # set, first entry wins.
                                    if part not in newss:
                                        newss.append(part)
                                productresult.append(newss)
                        markss = productresult
                    succssets.extend(markss)
                # remove duplicated and subset
                seen = []
                final = []
                candidate = sorted(((set(s), s) for s in succssets if s),
                                   key=lambda x: len(x[1]), reverse=True)
                for setversion, listversion in candidate:
                    for seenset in seen:
                        if setversion.issubset(seenset):
                            break
                    else:
                        final.append(listversion)
                        seen.append(setversion)
                final.reverse() # put small successors sets first
                cache[current] = final
    return cache[initialnode]
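The heart of phase 2 is the Cartesian-product step. A standalone sketch (hypothetical helper, plain dicts) of how one marker combines the already-known successors sets of every successor it names:

```python
# Hypothetical, simplified model of the product step in successorssets():
# build the successors sets contributed by one marker from the known
# successors sets of each of its successors.
def combine(marker_successors, known_sets):
    markss = [[]]
    for suc in marker_successors:
        productresult = []
        for prefix in markss:
            for suffix in known_sets[suc]:
                newss = list(prefix)
                for part in suffix:
                    if part not in newss:  # first entry wins
                        newss.append(part)
                productresult.append(newss)
        markss = productresult
    return markss

# A marker split X into B and C; B itself diverged into B1 or B2,
# while C is final (its only successors set is itself).
known = {'B': [('B1',), ('B2',)], 'C': [('C',)]}
print(combine(('B', 'C'), known))  # [['B1', 'C'], ['B2', 'C']]
```

Because B has two known successors sets, the marker contributes two sets, each one pairing a variant of B with C: exactly the divergence-times-split behaviour the comments describe.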

def _knownrevs(repo, nodes):
    """yield revision numbers of known nodes passed in parameters

    Unknown revisions are silently ignored."""
    torev = repo.changelog.nodemap.get
    for n in nodes:
        rev = torev(n)
        if rev is not None:
            yield rev

# mapping of 'set-name' -> <function to compute this set>
cachefuncs = {}
def cachefor(name):
    """Decorator to register a function as computing the cache for a set"""
    def decorator(func):
        assert name not in cachefuncs
        cachefuncs[name] = func
        return func
    return decorator
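The same registry-decorator pattern can be sketched in isolation (names here are hypothetical): a module-level dict maps each set name to its computing function, and the decorator fills the dict at definition time.

```python
# Hypothetical, standalone model of the cachefor() registry above.
cache_funcs = {}

def cache_for(name):
    def decorator(func):
        assert name not in cache_funcs  # each set may only be registered once
        cache_funcs[name] = func
        return func
    return decorator

@cache_for('evens')
def _compute_evens(limit):
    return {n for n in range(limit) if n % 2 == 0}

print(sorted(cache_funcs['evens'](6)))  # [0, 2, 4]
```

The decorated function is returned unchanged; registration is a side effect, which is what lets `getrevs`-style lookups dispatch by name without a hand-maintained table.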

def getrevs(repo, name):
    """Return the set of revisions that belong to the <name> set

    Such access may compute the set and cache it for future use"""
    repo = repo.unfiltered()
    if not repo.obsstore:
        return ()
    if name not in repo.obsstore.caches:
        repo.obsstore.caches[name] = cachefuncs[name](repo)
    return repo.obsstore.caches[name]

# To be simple we need to invalidate obsolescence caches when:
#
# - a new changeset is added
# - public phase is changed
# - obsolescence markers are added
# - strip is used on a repo
def clearobscaches(repo):
    """Remove all obsolescence related caches from a repo

    This removes all caches in the obsstore if the obsstore already exists on
    the repo.

    (We could be smarter here given the exact event that triggered the cache
    clearing)"""
    # only clear caches if there is obsstore data in this repo
    if 'obsstore' in repo._filecache:
        repo.obsstore.caches.clear()

@cachefor('obsolete')
def _computeobsoleteset(repo):
    """the set of obsolete revisions"""
    obs = set()
    getrev = repo.changelog.nodemap.get
    getphase = repo._phasecache.phase
    for n in repo.obsstore.successors:
        rev = getrev(n)
        if rev is not None and getphase(repo, rev):
            obs.add(rev)
    return obs

@cachefor('unstable')
def _computeunstableset(repo):
    """the set of non obsolete revisions with obsolete parents"""
    # revset is not efficient enough here
    # we do (obsolete()::) - obsolete() by hand
    obs = getrevs(repo, 'obsolete')
    if not obs:
        return set()
    cl = repo.changelog
    return set(r for r in cl.descendants(obs) if r not in obs)

@cachefor('suspended')
def _computesuspendedset(repo):
    """the set of obsolete parents with non obsolete descendants"""
    suspended = repo.changelog.ancestors(getrevs(repo, 'unstable'))
    return set(r for r in getrevs(repo, 'obsolete') if r in suspended)

@cachefor('extinct')
def _computeextinctset(repo):
    """the set of obsolete parents without non obsolete descendants"""
    return getrevs(repo, 'obsolete') - getrevs(repo, 'suspended')


@cachefor('bumped')
def _computebumpedset(repo):
    """the set of revs trying to obsolete public revisions"""
    bumped = set()
    # util function (avoid attribute lookup in the loop)
    phase = repo._phasecache.phase # would be faster to grab the full list
    public = phases.public
    cl = repo.changelog
    torev = cl.nodemap.get
    obs = getrevs(repo, 'obsolete')
    for rev in repo:
        # We only evaluate mutable, non-obsolete revisions
        if (public < phase(repo, rev)) and (rev not in obs):
            node = cl.node(rev)
            # (future) A cache of precursors may be worth it if split is very
            # common
            for pnode in allprecursors(repo.obsstore, [node],
                                       ignoreflags=bumpedfix):
                prev = torev(pnode) # unfiltered! but so is phasecache
                if (prev is not None) and (phase(repo, prev) <= public):
                    # we have a public precursor
                    bumped.add(rev)
                    break # Next draft!
    return bumped

@cachefor('divergent')
def _computedivergentset(repo):
    """the set of revs that compete to be the final successors of some
    revision."""
    divergent = set()
    obsstore = repo.obsstore
    newermap = {}
    for ctx in repo.set('(not public()) - obsolete()'):
        mark = obsstore.precursors.get(ctx.node(), ())
        toprocess = set(mark)
        while toprocess:
            prec = toprocess.pop()[0]
            if prec not in newermap:
                successorssets(repo, prec, newermap)
            newer = [n for n in newermap[prec] if n]
            if len(newer) > 1:
                divergent.add(ctx.rev())
                break
            toprocess.update(obsstore.precursors.get(prec, ()))
    return divergent


def createmarkers(repo, relations, flag=0, date=None, metadata=None):
    """Add obsolete markers between changesets in a repo

    <relations> must be an iterable of (<old>, (<new>, ...)[,{metadata}])
    tuples. `old` and `news` are changectx objects. metadata is an optional
    dictionary containing metadata for this marker only. It is merged with
    the global metadata specified through the `metadata` argument of this
    function.

    Trying to obsolete a public changeset will raise an exception.

    Current user and date are used except if specified otherwise in the
    metadata attribute.

    This function operates within a transaction of its own, but does
    not take any lock on the repo.
    """
    # prepare metadata
    if metadata is None:
        metadata = {}
    if 'user' not in metadata:
        metadata['user'] = repo.ui.username()
    tr = repo.transaction('add-obsolescence-marker')
    try:
        for rel in relations:
            prec = rel[0]
            sucs = rel[1]
            localmetadata = metadata.copy()
            if 2 < len(rel):
                localmetadata.update(rel[2])

            if not prec.mutable():
                raise util.Abort("cannot obsolete immutable changeset: %s"
                                 % prec)
            nprec = prec.node()
            nsucs = tuple(s.node() for s in sucs)
            npare = None
            if not nsucs:
                npare = tuple(p.node() for p in prec.parents())
            if nprec in nsucs:
                raise util.Abort("changeset %s cannot obsolete itself" % prec)
984 raise util.Abort("changeset %s cannot obsolete itself" % prec)
985 repo.obsstore.create(tr, nprec, nsucs, flag, parents=npare,
985 repo.obsstore.create(tr, nprec, nsucs, flag, parents=npare,
986 date=date, metadata=localmetadata)
986 date=date, metadata=localmetadata)
987 repo.filteredrevcache.clear()
987 repo.filteredrevcache.clear()
988 tr.close()
988 tr.close()
989 finally:
989 finally:
990 tr.release()
990 tr.release()
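Each `relations` entry is a 2- or 3-tuple whose optional third element carries per-marker metadata that overrides the function-wide defaults. A sketch of that normalization and validation step in isolation, using plain values instead of changectx objects (the helper name is illustrative, not part of Mercurial's API):

```python
def normalize_relation(rel, globalmeta):
    """Interpret one (old, (new, ...)[, {metadata}]) relation tuple.

    Mirrors the per-relation handling in createmarkers above:
    merge metadata and reject self-obsolescence.
    """
    prec, sucs = rel[0], tuple(rel[1])
    meta = dict(globalmeta)
    if len(rel) > 2:
        meta.update(rel[2])       # per-marker metadata wins over global
    if prec in sucs:
        raise ValueError("changeset %s cannot obsolete itself" % prec)
    return prec, sucs, meta
```

An empty successor tuple would correspond to "killing" the old changeset, which is why `createmarkers` records the precursor's parents in that case.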