readmarkers: drop metadata temporary
Matt Mackall
r23796:5fc29b45 default
@@ -1,1169 +1,1168
# obsolete.py - obsolete markers handling
#
# Copyright 2012 Pierre-Yves David <pierre-yves.david@ens-lyon.org>
#                Logilab SA        <contact@logilab.fr>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""Obsolete marker handling

An obsolete marker maps an old changeset to a list of new
changesets. If the list of new changesets is empty, the old changeset
is said to be "killed". Otherwise, the old changeset is being
"replaced" by the new changesets.

Obsolete markers can be used to record and distribute changeset graph
transformations performed by history rewrite operations, and help
building new tools to reconcile conflicting rewrite actions. To
facilitate conflict resolution, markers include various annotations
besides old and new changeset identifiers, such as creation date or
author name.

The old obsoleted changeset is called a "precursor" and possible
replacements are called "successors". Markers that used changeset X as
a precursor are called "successor markers of X" because they hold
information about the successors of X. Markers that use changeset Y as
a successor are called "precursor markers of Y" because they hold
information about the precursors of Y.

Examples:

- When changeset A is replaced by changeset A', one marker is stored:

    (A, (A',))

- When changesets A and B are folded into a new changeset C, two markers are
  stored:

    (A, (C,)) and (B, (C,))

- When changeset A is simply "pruned" from the graph, a marker is created:

    (A, ())

- When changeset A is split into B and C, a single marker is used:

    (A, (B, C))

  We use a single marker to distinguish the "split" case from the "divergence"
  case. If two independent operations rewrite the same changeset A into A' and
  A'', we have an error case: divergent rewriting. We can detect it because
  two markers will be created independently:

  (A, (B,)) and (A, (C,))

Format
------

Markers are stored in an append-only file stored in
'.hg/store/obsstore'.

The file starts with a version header:

- 1 unsigned byte: version number, starting at zero.

The header is followed by the markers. Marker formats depend on the version.
See the comment associated with each format for details.

"""
import struct
import util, base85, node
import phases
from i18n import _

_pack = struct.pack
_unpack = struct.unpack
_calcsize = struct.calcsize

_SEEK_END = 2 # os.SEEK_END was introduced in Python 2.5

# the obsolete feature is not mature enough to be enabled by default.
# you have to rely on third party extensions to enable this.
_enabled = False

# Options for obsolescence
createmarkersopt = 'createmarkers'
allowunstableopt = 'allowunstable'
exchangeopt = 'exchange'

### obsolescence marker flag

## bumpedfix flag
#
# When a changeset A' succeeds a changeset A which became public, we call A'
# "bumped" because it is a successor of a public changeset
#
# o A' (bumped)
# |`:
# | o A
# |/
# o Z
#
# The way to solve this situation is to create a new changeset Ad as a child
# of A. This changeset has the same content as A'. So the diff from A to A'
# is the same as the diff from A to Ad. Ad is marked as a successor of A'
#
# o Ad
# |`:
# | x A'
# |'|
# o | A
# |/
# o Z
#
# But by transitivity Ad is also a successor of A. To avoid having Ad marked
# as bumped too, we add the `bumpedfix` flag to the marker, <A', (Ad,)>.
# This flag means that the successors express the changes between the public
# and bumped version and fix the situation, breaking the transitivity of
# "bumped" here.
bumpedfix = 1
usingsha256 = 2

## Parsing and writing of version "0"
#
# The header is followed by the markers. Each marker is made of:
#
# - 1 uint8 : number of new changesets "N", can be zero.
#
# - 1 uint32: metadata size "M" in bytes.
#
# - 1 byte: a bit field. It is reserved for flags used in common
#   obsolete marker operations, to avoid repeated decoding of metadata
#   entries.
#
# - 20 bytes: obsoleted changeset identifier.
#
# - N*20 bytes: new changesets identifiers.
#
# - M bytes: metadata as a sequence of nul-terminated strings. Each
#   string contains a key and a value, separated by a colon ':', without
#   additional encoding. Keys cannot contain '\0' or ':' and values
#   cannot contain '\0'.
_fm0version = 0
_fm0fixed = '>BIB20s'
_fm0node = '20s'
_fm0fsize = _calcsize(_fm0fixed)
_fm0fnodesize = _calcsize(_fm0node)
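# A doctest-style sketch of the fixed part above (illustrative only; the
# precursor id is a fabricated 20-byte placeholder, not a real node):
#
#   >>> import struct
#   >>> struct.calcsize('>BIB20s')    # uint8 + uint32 + byte + 20s
#   26
#   >>> fixed = struct.pack('>BIB20s', 1, 0, 0, '\x11' * 20)
#   >>> struct.unpack('>BIB20s', fixed)[0]    # one successor follows
#   1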
def _fm0readmarkers(data, off=0):
    # Loop on markers
    l = len(data)
    while off + _fm0fsize <= l:
        # read fixed part
        cur = data[off:off + _fm0fsize]
        off += _fm0fsize
        numsuc, mdsize, flags, pre = _unpack(_fm0fixed, cur)
        # read replacement
        sucs = ()
        if numsuc:
            s = (_fm0fnodesize * numsuc)
            cur = data[off:off + s]
            sucs = _unpack(_fm0node * numsuc, cur)
            off += s
        # read metadata
        # (metadata will be decoded on demand)
        metadata = data[off:off + mdsize]
        if len(metadata) != mdsize:
            raise util.Abort(_('parsing obsolete marker: metadata is too '
                               'short, %d bytes expected, got %d')
                             % (mdsize, len(metadata)))
        off += mdsize
        metadata = _fm0decodemeta(metadata)
        try:
            when, offset = metadata.pop('date', '0 0').split(' ')
            date = float(when), int(offset)
        except ValueError:
            date = (0., 0)
        parents = None
        if 'p2' in metadata:
            parents = (metadata.pop('p1', None), metadata.pop('p2', None))
        elif 'p1' in metadata:
            parents = (metadata.pop('p1', None),)
        elif 'p0' in metadata:
            parents = ()
        if parents is not None:
            try:
                parents = tuple(node.bin(p) for p in parents)
                # if parent content is not a nodeid, drop the data
                for p in parents:
                    if len(p) != 20:
                        parents = None
                        break
            except TypeError:
                # if content cannot be translated to nodeid, drop the data.
                parents = None

        metadata = tuple(sorted(metadata.iteritems()))

        yield (pre, sucs, flags, metadata, date, parents)

def _fm0encodeonemarker(marker):
    pre, sucs, flags, metadata, date, parents = marker
    if flags & usingsha256:
        raise util.Abort(_('cannot handle sha256 with old obsstore format'))
    metadata = dict(metadata)
    time, tz = date
    metadata['date'] = '%r %i' % (time, tz)
    if parents is not None:
        if not parents:
            # mark that we explicitly recorded no parents
            metadata['p0'] = ''
        for i, p in enumerate(parents):
            metadata['p%i' % (i + 1)] = node.hex(p)
    metadata = _fm0encodemeta(metadata)
    numsuc = len(sucs)
    format = _fm0fixed + (_fm0node * numsuc)
    data = [numsuc, len(metadata), flags, pre]
    data.extend(sucs)
    return _pack(format, *data) + metadata

def _fm0encodemeta(meta):
    """Return encoded metadata string to string mapping.

    Assume no ':' in key and no '\0' in both key and value."""
    for key, value in meta.iteritems():
        if ':' in key or '\0' in key:
            raise ValueError("':' and '\0' are forbidden in metadata key")
        if '\0' in value:
            raise ValueError("'\0' is forbidden in metadata value")
    return '\0'.join(['%s:%s' % (k, meta[k]) for k in sorted(meta)])

def _fm0decodemeta(data):
    """Return string to string dictionary from encoded version."""
    d = {}
    for l in data.split('\0'):
        if l:
            key, value = l.split(':')
            d[key] = value
    return d
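# Round-trip sketch for this v0 metadata encoding (keys sorted, pairs
# NUL-separated); the keys and values here are illustrative:
#
#   >>> _fm0encodemeta({'note': 'fold', 'user': 'alice'})
#   'note:fold\x00user:alice'
#   >>> sorted(_fm0decodemeta('note:fold\x00user:alice').items())
#   [('note', 'fold'), ('user', 'alice')]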
## Parsing and writing of version "1"
#
# The header is followed by the markers. Each marker is made of:
#
# - uint32: total size of the marker (including this field)
#
# - float64: date in seconds since epoch
#
# - int16: timezone offset in minutes
#
# - uint16: a bit field. It is reserved for flags used in common
#   obsolete marker operations, to avoid repeated decoding of metadata
#   entries.
#
# - uint8: number of successors "N", can be zero.
#
# - uint8: number of parents "P", can be zero.
#
#   0: parents data stored but no parent,
#   1: one parent stored,
#   2: two parents stored,
#   3: no parent data stored
#
# - uint8: number of metadata entries M
#
# - 20 or 32 bytes: precursor changeset identifier.
#
# - N*(20 or 32) bytes: successor changeset identifiers.
#
# - P*(20 or 32) bytes: parents of the precursor changeset.
#
# - M*(uint8, uint8): sizes of all metadata entries (key and value)
#
# - remaining bytes: the metadata, each (key, value) pair after the other.
_fm1version = 1
_fm1fixed = '>IdhHBBB20s'
_fm1nodesha1 = '20s'
_fm1nodesha256 = '32s'
_fm1nodesha1size = _calcsize(_fm1nodesha1)
_fm1nodesha256size = _calcsize(_fm1nodesha256)
_fm1fsize = _calcsize(_fm1fixed)
_fm1parentnone = 3
_fm1parentshift = 14
_fm1parentmask = (_fm1parentnone << _fm1parentshift)
_fm1metapair = 'BB'
_fm1metapairsize = _calcsize('BB')
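# Size sketch for the v1 fixed part (illustrative): '>IdhHBBB20s' packs
# uint32 + float64 + int16 + uint16 + three uint8 + a 20-byte node:
#
#   >>> import struct
#   >>> struct.calcsize('>IdhHBBB20s')    # 4 + 8 + 2 + 2 + 3 + 20
#   39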
def _fm1readmarkers(data, off=0):
    # Loop on markers
    l = len(data)
    while off + _fm1fsize <= l:
        # read fixed part
        fixeddata = _unpack(_fm1fixed, data[off:off + _fm1fsize])
        off += _fm1fsize
        ttsize, seconds, tz, flags, numsuc, numpar, nummeta, prec = fixeddata

        _fm1node = _fm1nodesha1
        fnodesize = _fm1nodesha1size
        if flags & usingsha256:
            _fm1node = _fm1nodesha256
            fnodesize = _fm1nodesha256size

        # read 0 or more successors
        s = (fnodesize * numsuc)
        sucs = _unpack(_fm1node * numsuc, data[off:off + s])
        off += s

        # read parents
        if numpar == _fm1parentnone:
            parents = None
        else:
            s = (fnodesize * numpar)
            parents = _unpack(_fm1node * numpar, data[off:off + s])
            off += s

        # read metadata
        metaformat = '>' + (_fm1metapair * nummeta)
        s = _fm1metapairsize * nummeta
        metapairsize = _unpack(metaformat, data[off:off + s])
        off += s
        metadata = []
        for idx in xrange(0, len(metapairsize), 2):
            sk = metapairsize[idx]
            sv = metapairsize[idx + 1]
            metadata.append((data[off:off + sk], data[off + sk:off + sk + sv]))
            off += sk + sv
-        metadata = tuple(metadata)

-        yield (prec, sucs, flags, metadata, (seconds, tz * 60), parents)
+        yield (prec, sucs, flags, tuple(metadata), (seconds, tz * 60), parents)

def _fm1encodeonemarker(marker):
    pre, sucs, flags, metadata, date, parents = marker
    # determine node size
    _fm1node = _fm1nodesha1
    if flags & usingsha256:
        _fm1node = _fm1nodesha256
    numsuc = len(sucs)
    numextranodes = numsuc
    if parents is None:
        numpar = _fm1parentnone
    else:
        numpar = len(parents)
        numextranodes += numpar
    formatnodes = _fm1node * numextranodes
    formatmeta = _fm1metapair * len(metadata)
    format = _fm1fixed + formatnodes + formatmeta
    # tz is stored in minutes so we divide by 60
    tz = date[1]//60
    data = [None, date[0], tz, flags, numsuc, numpar, len(metadata), pre]
    data.extend(sucs)
    if parents is not None:
        data.extend(parents)
    totalsize = _calcsize(format)
    for key, value in metadata:
        lk = len(key)
        lv = len(value)
        data.append(lk)
        data.append(lv)
        totalsize += lk + lv
    data[0] = totalsize
    data = [_pack(format, *data)]
    for key, value in metadata:
        data.append(key)
        data.append(value)
    return ''.join(data)

# mapping to read/write various marker formats
# <version> -> (decoder, encoder)
formats = {_fm0version: (_fm0readmarkers, _fm0encodeonemarker),
           _fm1version: (_fm1readmarkers, _fm1encodeonemarker)}

@util.nogc
def _readmarkers(data):
    """Read and enumerate markers from raw data"""
    off = 0
    diskversion = _unpack('>B', data[off:off + 1])[0]
    off += 1
    if diskversion not in formats:
        raise util.Abort(_('parsing obsolete marker: unknown version %r')
                         % diskversion)
    return diskversion, formats[diskversion][0](data, off)

def encodemarkers(markers, addheader=False, version=_fm0version):
    # Kept separate from flushmarkers(), it will be reused for
    # markers exchange.
    encodeone = formats[version][1]
    if addheader:
        yield _pack('>B', version)
    for marker in markers:
        yield encodeone(marker)
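# End-to-end sketch: encode one marker with a header, then parse it back
# (the two node ids are fabricated 20-byte placeholders):
#
#   >>> m = ('\x11' * 20, ('\x22' * 20,), 0, (), (0.0, 0), None)
#   >>> raw = ''.join(encodemarkers([m], addheader=True))
#   >>> version, read = _readmarkers(raw)
#   >>> version, list(read)[0][0] == m[0]
#   (0, True)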
class marker(object):
    """Wrap obsolete marker raw data"""

    def __init__(self, repo, data):
        # the repo argument will be used to create changectx in later version
        self._repo = repo
        self._data = data
        self._decodedmeta = None

    def __hash__(self):
        return hash(self._data)

    def __eq__(self, other):
        if type(other) != type(self):
            return False
        return self._data == other._data

    def precnode(self):
        """Precursor changeset node identifier"""
        return self._data[0]

    def succnodes(self):
        """List of successor changesets node identifiers"""
        return self._data[1]

    def parentnodes(self):
        """Parents of the precursors (None if not recorded)"""
        return self._data[5]

    def metadata(self):
        """Decoded metadata dictionary"""
        return dict(self._data[3])

    def date(self):
        """Creation date as (unixtime, offset)"""
        return self._data[4]

    def flags(self):
        """The flags field of the marker"""
        return self._data[2]

class obsstore(object):
    """Store obsolete markers

    Markers can be accessed with three mappings:
    - precursors[x] -> set(markers on precursors edges of x)
    - successors[x] -> set(markers on successors edges of x)
    - children[x]   -> set(markers on precursors edges of children(x))
    """

    fields = ('prec', 'succs', 'flag', 'meta', 'date', 'parents')
    # prec:    nodeid, precursor changesets
    # succs:   tuple of nodeid, successor changesets (0-N length)
    # flag:    integer, flag field carrying modifier for the markers (see doc)
    # meta:    binary blob, encoded metadata dictionary
    # date:    (float, int) tuple, date of marker creation
    # parents: (tuple of nodeid) or None, parents of precursors
    #          None is used when no data has been recorded

    def __init__(self, sopener, defaultformat=_fm1version, readonly=False):
        # caches for various obsolescence related data
        self.caches = {}
        self._all = []
        self.precursors = {}
        self.successors = {}
        self.children = {}
        self.sopener = sopener
        data = sopener.tryread('obsstore')
        self._version = defaultformat
        self._readonly = readonly
        if data:
            self._version, markers = _readmarkers(data)
            self._load(markers)

    def __iter__(self):
        return iter(self._all)

    def __len__(self):
        return len(self._all)

    def __nonzero__(self):
        return bool(self._all)

    def create(self, transaction, prec, succs=(), flag=0, parents=None,
               date=None, metadata=None):
        """obsolete: add a new obsolete marker

        * ensure it is hashable
        * check mandatory metadata
        * encode metadata

        If you are a human writing code creating markers you want to use the
        `createmarkers` function in this module instead.

        return True if a new marker has been added, False if the marker
        already existed (no op).
        """
        if metadata is None:
            metadata = {}
        if date is None:
            if 'date' in metadata:
                # as a courtesy for out-of-tree extensions
                date = util.parsedate(metadata.pop('date'))
            else:
                date = util.makedate()
        if len(prec) != 20:
            raise ValueError(prec)
        for succ in succs:
            if len(succ) != 20:
                raise ValueError(succ)
        if prec in succs:
            raise ValueError(_('in-marker cycle with %s') % node.hex(prec))

        metadata = tuple(sorted(metadata.iteritems()))

        marker = (str(prec), tuple(succs), int(flag), metadata, date, parents)
        return bool(self.add(transaction, [marker]))

    def add(self, transaction, markers):
        """Add new markers to the store

        Take care of filtering duplicates.
        Return the number of new markers."""
        if self._readonly:
            raise util.Abort('creating obsolete markers is not enabled on '
                             'this repo')
        known = set(self._all)
        new = []
        for m in markers:
            if m not in known:
                known.add(m)
                new.append(m)
        if new:
            f = self.sopener('obsstore', 'ab')
            try:
                # Whether the file's current position is at the beginning or
                # at the end after opening a file for appending is
                # implementation defined. So we must seek to the end before
                # calling tell(), or we may get a zero offset for non-zero
                # sized files on some platforms (issue3543).
                f.seek(0, _SEEK_END)
                offset = f.tell()
                transaction.add('obsstore', offset)
                # offset == 0: new file - add the version header
                for bytes in encodemarkers(new, offset == 0, self._version):
                    f.write(bytes)
            finally:
                # XXX: f.close() == filecache invalidation == obsstore rebuilt.
                # call 'filecacheentry.refresh()' here
                f.close()
            self._load(new)
            # new markers *may* have changed several sets. invalidate the
            # cache.
            self.caches.clear()
        # records the number of new markers for the transaction hooks
        previous = int(transaction.hookargs.get('new_obsmarkers', '0'))
        transaction.hookargs['new_obsmarkers'] = str(previous + len(new))
        return len(new)

    def mergemarkers(self, transaction, data):
        """merge a binary stream of markers inside the obsstore

        Returns the number of new markers added."""
        version, markers = _readmarkers(data)
        return self.add(transaction, markers)

    @util.nogc
    def _load(self, markers):
        for mark in markers:
            self._all.append(mark)
            pre, sucs = mark[:2]
            self.successors.setdefault(pre, set()).add(mark)
            for suc in sucs:
                self.precursors.setdefault(suc, set()).add(mark)
            parents = mark[5]
            if parents is not None:
                for p in parents:
                    self.children.setdefault(p, set()).add(mark)
        if node.nullid in self.precursors:
            raise util.Abort(_('bad obsolescence marker detected: '
                               'invalid successors nullid'))

    def relevantmarkers(self, nodes):
        """return a set of all obsolescence markers relevant to a set of nodes.

        "relevant" to a set of nodes means:

        - markers that use this changeset as successor
        - prune markers of direct children on this changeset
        - recursive application of the two rules on precursors of these markers

        It is a set so you cannot rely on order."""

        pendingnodes = set(nodes)
        seenmarkers = set()
        seennodes = set(pendingnodes)
        precursorsmarkers = self.precursors
        children = self.children
        while pendingnodes:
            direct = set()
            for current in pendingnodes:
                direct.update(precursorsmarkers.get(current, ()))
                pruned = [m for m in children.get(current, ()) if not m[1]]
                direct.update(pruned)
            direct -= seenmarkers
            pendingnodes = set([m[0] for m in direct])
            seenmarkers |= direct
            pendingnodes -= seennodes
            seennodes |= pendingnodes
        return seenmarkers
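# Index sketch: after _load() sees a marker (P, (S1, S2), flags, meta,
# date, (PP1,)), the store can reach it three ways (names are
# placeholders):
#
#   successors[P]   -> the marker (P was rewritten)
#   precursors[S1]  -> the marker (S1 replaces part of P)
#   precursors[S2]  -> the marker (S2 replaces part of P)
#   children[PP1]   -> the marker (PP1 is a recorded parent of P)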
def commonversion(versions):
    """Return the newest version listed in both versions and our local formats.

    Returns None if no common version exists.
    """
    versions.sort(reverse=True)
    # search for highest version known on both sides
    for v in versions:
        if v in formats:
            return v
    return None

# arbitrarily picked to fit into 8K limit from HTTP server
# you have to take into account:
# - the version header
# - the base85 encoding
_maxpayload = 5300

def _pushkeyescape(markers):
    """encode markers into a dict suitable for pushkey exchange

    - binary data is base85 encoded
    - split in chunks smaller than 5300 bytes"""
    keys = {}
    parts = []
    currentlen = _maxpayload * 2  # ensure we create a new part
    for marker in markers:
        nextdata = _fm0encodeonemarker(marker)
        if (len(nextdata) + currentlen > _maxpayload):
            currentpart = []
            currentlen = 0
            parts.append(currentpart)
        currentpart.append(nextdata)
        currentlen += len(nextdata)
    for idx, part in enumerate(reversed(parts)):
        data = ''.join([_pack('>B', _fm0version)] + part)
        keys['dump%i' % idx] = base85.b85encode(data)
    return keys
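# Chunking sketch: markers are greedily packed into parts that stay under
# _maxpayload; each part is prefixed with the v0 version byte and base85
# encoded. A single small marker therefore lands in one chunk (the marker
# below is an illustrative placeholder):
#
#   >>> m = ('\x11' * 20, (), 0, (), (0.0, 0), None)
#   >>> sorted(_pushkeyescape([m]).keys())
#   ['dump0']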
def listmarkers(repo):
    """List markers over pushkey"""
    if not repo.obsstore:
        return {}
    return _pushkeyescape(repo.obsstore)

def pushmarker(repo, key, old, new):
    """Push markers over pushkey"""
    if not key.startswith('dump'):
        repo.ui.warn(_('unknown key: %r') % key)
        return 0
    if old:
        repo.ui.warn(_('unexpected old value for %r') % key)
        return 0
    data = base85.b85decode(new)
    lock = repo.lock()
    try:
        tr = repo.transaction('pushkey: obsolete markers')
        try:
            repo.obsstore.mergemarkers(tr, data)
            tr.close()
            return 1
        finally:
            tr.release()
    finally:
        lock.release()

def getmarkers(repo, nodes=None):
    """returns markers known in a repository

    If <nodes> is specified, only markers "relevant" to those nodes are
    returned"""
    if nodes is None:
        rawmarkers = repo.obsstore
    else:
        rawmarkers = repo.obsstore.relevantmarkers(nodes)

    for markerdata in rawmarkers:
        yield marker(repo, markerdata)

def relevantmarkers(repo, node):
    """all obsolete markers relevant to some revision"""
    for markerdata in repo.obsstore.relevantmarkers(node):
        yield marker(repo, markerdata)


def precursormarkers(ctx):
    """obsolete marker marking this changeset as a successor"""
    for data in ctx._repo.obsstore.precursors.get(ctx.node(), ()):
        yield marker(ctx._repo, data)

def successormarkers(ctx):
    """obsolete marker making this changeset obsolete"""
    for data in ctx._repo.obsstore.successors.get(ctx.node(), ()):
        yield marker(ctx._repo, data)

def allsuccessors(obsstore, nodes, ignoreflags=0):
    """Yield node for every successor of <nodes>.

    Some successors may be unknown locally.

    This is a linear yield unsuited to detecting split changesets. It includes
    initial nodes too."""
    remaining = set(nodes)
    seen = set(remaining)
    while remaining:
        current = remaining.pop()
        yield current
        for mark in obsstore.successors.get(current, ()):
            # ignore markers flagged with specified flag
            if mark[2] & ignoreflags:
                continue
            for suc in mark[1]:
                if suc not in seen:
                    seen.add(suc)
                    remaining.add(suc)
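# Traversal sketch: if the store holds markers A -> (B,) and B -> (C,),
# allsuccessors(obsstore, [A]) yields A, B and C in no particular order;
# passing ignoreflags=bumpedfix would skip any of those markers carrying
# the bumpedfix flag (A, B, C stand for placeholder 20-byte node ids).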
718 def allprecursors(obsstore, nodes, ignoreflags=0):
717 def allprecursors(obsstore, nodes, ignoreflags=0):
719 """Yield node for every precursors of <nodes>.
718 """Yield node for every precursors of <nodes>.
720
719
721 Some precursors may be unknown locally.
720 Some precursors may be unknown locally.
722
721
723 This is a linear yield unsuited to detecting folded changesets. It includes
722 This is a linear yield unsuited to detecting folded changesets. It includes
724 initial nodes too."""
723 initial nodes too."""
725
724
726 remaining = set(nodes)
725 remaining = set(nodes)
727 seen = set(remaining)
726 seen = set(remaining)
728 while remaining:
727 while remaining:
729 current = remaining.pop()
728 current = remaining.pop()
730 yield current
729 yield current
731 for mark in obsstore.precursors.get(current, ()):
730 for mark in obsstore.precursors.get(current, ()):
732 # ignore marker flagged with specified flag
731 # ignore marker flagged with specified flag
733 if mark[2] & ignoreflags:
732 if mark[2] & ignoreflags:
734 continue
733 continue
735 suc = mark[0]
734 suc = mark[0]
736 if suc not in seen:
735 if suc not in seen:
737 seen.add(suc)
736 seen.add(suc)
738 remaining.add(suc)
737 remaining.add(suc)
739
738
740 def foreground(repo, nodes):
739 def foreground(repo, nodes):
741 """return all nodes in the "foreground" of other node
740 """return all nodes in the "foreground" of other node
742
741
743 The foreground of a revision is anything reachable using parent -> children
742 The foreground of a revision is anything reachable using parent -> children
744 or precursor -> successor relation. It is very similar to "descendant" but
743 or precursor -> successor relation. It is very similar to "descendant" but
745 augmented with obsolescence information.
744 augmented with obsolescence information.
746
745
747 Beware that possible obsolescence cycle may result if complex situation.
746 Beware that possible obsolescence cycle may result if complex situation.
748 """
747 """
749 repo = repo.unfiltered()
748 repo = repo.unfiltered()
750 foreground = set(repo.set('%ln::', nodes))
749 foreground = set(repo.set('%ln::', nodes))
751 if repo.obsstore:
750 if repo.obsstore:
752 # We only need this complicated logic if there is obsolescence
751 # We only need this complicated logic if there is obsolescence
753 # XXX will probably deserve an optimised revset.
752 # XXX will probably deserve an optimised revset.
754 nm = repo.changelog.nodemap
753 nm = repo.changelog.nodemap
755 plen = -1
754 plen = -1
756 # compute the whole set of successors or descendants
755 # compute the whole set of successors or descendants
757 while len(foreground) != plen:
756 while len(foreground) != plen:
758 plen = len(foreground)
757 plen = len(foreground)
759 succs = set(c.node() for c in foreground)
758 succs = set(c.node() for c in foreground)
760 mutable = [c.node() for c in foreground if c.mutable()]
759 mutable = [c.node() for c in foreground if c.mutable()]
761 succs.update(allsuccessors(repo.obsstore, mutable))
760 succs.update(allsuccessors(repo.obsstore, mutable))
762 known = (n for n in succs if n in nm)
761 known = (n for n in succs if n in nm)
763 foreground = set(repo.set('%ln::', known))
762 foreground = set(repo.set('%ln::', known))
764 return set(c.node() for c in foreground)
763 return set(c.node() for c in foreground)
765
764
766
765
767 def successorssets(repo, initialnode, cache=None):
766 def successorssets(repo, initialnode, cache=None):
768 """Return all set of successors of initial nodes
767 """Return all set of successors of initial nodes
769
768
770 The successors set of a changeset A are a group of revisions that succeed
769 The successors set of a changeset A are a group of revisions that succeed
771 A. It succeeds A as a consistent whole, each revision being only a partial
770 A. It succeeds A as a consistent whole, each revision being only a partial
772 replacement. The successors set contains non-obsolete changesets only.
771 replacement. The successors set contains non-obsolete changesets only.
773
772
774 This function returns the full list of successor sets which is why it
773 This function returns the full list of successor sets which is why it
775 returns a list of tuples and not just a single tuple. Each tuple is a valid
774 returns a list of tuples and not just a single tuple. Each tuple is a valid
776 successors set. Not that (A,) may be a valid successors set for changeset A
775 successors set. Not that (A,) may be a valid successors set for changeset A
777 (see below).
776 (see below).
778
777
779 In most cases, a changeset A will have a single element (e.g. the changeset
778 In most cases, a changeset A will have a single element (e.g. the changeset
780 A is replaced by A') in its successors set. Though, it is also common for a
779 A is replaced by A') in its successors set. Though, it is also common for a
781 changeset A to have no elements in its successor set (e.g. the changeset
780 changeset A to have no elements in its successor set (e.g. the changeset
782 has been pruned). Therefore, the returned list of successors sets will be
781 has been pruned). Therefore, the returned list of successors sets will be
783 [(A',)] or [], respectively.
782 [(A',)] or [], respectively.
784
783
785 When a changeset A is split into A' and B', however, it will result in a
784 When a changeset A is split into A' and B', however, it will result in a
786 successors set containing more than a single element, i.e. [(A',B')].
785 successors set containing more than a single element, i.e. [(A',B')].
787 Divergent changesets will result in multiple successors sets, i.e. [(A',),
786 Divergent changesets will result in multiple successors sets, i.e. [(A',),
788 (A'')].
787 (A'')].
789
788
790 If a changeset A is not obsolete, then it will conceptually have no
789 If a changeset A is not obsolete, then it will conceptually have no
791 successors set. To distinguish this from a pruned changeset, the successor
790 successors set. To distinguish this from a pruned changeset, the successor
792 set will only contain itself, i.e. [(A,)].
791 set will only contain itself, i.e. [(A,)].
793
792
794 Finally, successors unknown locally are considered to be pruned (obsoleted
793 Finally, successors unknown locally are considered to be pruned (obsoleted
795 without any successors).
794 without any successors).
796
795
797 The optional `cache` parameter is a dictionary that may contain precomputed
796 The optional `cache` parameter is a dictionary that may contain precomputed
798 successors sets. It is meant to reuse the computation of a previous call to
797 successors sets. It is meant to reuse the computation of a previous call to
799 `successorssets` when multiple calls are made at the same time. The cache
798 `successorssets` when multiple calls are made at the same time. The cache
800 dictionary is updated in place. The caller is responsible for its live
799 dictionary is updated in place. The caller is responsible for its live
801 spawn. Code that makes multiple calls to `successorssets` *must* use this
800 spawn. Code that makes multiple calls to `successorssets` *must* use this
802 cache mechanism or suffer terrible performances.
801 cache mechanism or suffer terrible performances.
803
802
804 """
803 """
805
804
806 succmarkers = repo.obsstore.successors
805 succmarkers = repo.obsstore.successors
807
806
808 # Stack of nodes we search successors sets for
807 # Stack of nodes we search successors sets for
809 toproceed = [initialnode]
808 toproceed = [initialnode]
810 # set version of above list for fast loop detection
809 # set version of above list for fast loop detection
811 # element added to "toproceed" must be added here
810 # element added to "toproceed" must be added here
812 stackedset = set(toproceed)
811 stackedset = set(toproceed)
813 if cache is None:
812 if cache is None:
814 cache = {}
813 cache = {}
815
814
816 # This while loop is the flattened version of a recursive search for
815 # This while loop is the flattened version of a recursive search for
817 # successors sets
816 # successors sets
818 #
817 #
819 # def successorssets(x):
818 # def successorssets(x):
820 # successors = directsuccessors(x)
819 # successors = directsuccessors(x)
821 # ss = [[]]
820 # ss = [[]]
822 # for succ in directsuccessors(x):
821 # for succ in directsuccessors(x):
823 # # product as in itertools cartesian product
822 # # product as in itertools cartesian product
824 # ss = product(ss, successorssets(succ))
823 # ss = product(ss, successorssets(succ))
825 # return ss
824 # return ss
826 #
825 #
827 # But we can not use plain recursive calls here:
826 # But we can not use plain recursive calls here:
828 # - that would blow the python call stack
827 # - that would blow the python call stack
829 # - obsolescence markers may have cycles, we need to handle them.
828 # - obsolescence markers may have cycles, we need to handle them.
830 #
829 #
831 # The `toproceed` list act as our call stack. Every node we search
830 # The `toproceed` list act as our call stack. Every node we search
832 # successors set for are stacked there.
831 # successors set for are stacked there.
833 #
832 #
834 # The `stackedset` is set version of this stack used to check if a node is
833 # The `stackedset` is set version of this stack used to check if a node is
835 # already stacked. This check is used to detect cycles and prevent infinite
834 # already stacked. This check is used to detect cycles and prevent infinite
836 # loop.
835 # loop.
837 #
836 #
838 # successors set of all nodes are stored in the `cache` dictionary.
837 # successors set of all nodes are stored in the `cache` dictionary.
839 #
838 #
840 # After this while loop ends we use the cache to return the successors sets
839 # After this while loop ends we use the cache to return the successors sets
841 # for the node requested by the caller.
840 # for the node requested by the caller.
    while toproceed:
        # Every iteration tries to compute the successors sets of the
        # topmost node of the stack: CURRENT.
        #
        # There are four possible outcomes:
        #
        # 1) We already know the successors sets of CURRENT:
        #    -> mission accomplished, pop it from the stack.
        # 2) Node is not obsolete:
        #    -> the node is its own successors set. Add it to the cache.
        # 3) We do not know the successors sets of the direct successors of
        #    CURRENT:
        #    -> We add those successors to the stack.
        # 4) We know the successors sets of all direct successors of CURRENT:
        #    -> We can compute CURRENT's successors set and add it to the
        #       cache.
        #
        current = toproceed[-1]
        if current in cache:
            # case (1): We already know the successors sets
            stackedset.remove(toproceed.pop())
        elif current not in succmarkers:
            # case (2): The node is not obsolete.
            if current in repo:
                # We have a valid last successor.
                cache[current] = [(current,)]
            else:
                # Final obsolete version is unknown locally.
                # Do not count that as a valid successor.
                cache[current] = []
        else:
            # cases (3) and (4)
            #
            # We proceed in two phases. Phase 1 aims to distinguish case (3)
            # from case (4):
            #
            # For each direct successor of CURRENT, we check whether its
            # successors sets are known. If they are not, we stack the
            # unknown node and proceed to the next iteration of the while
            # loop. (case 3)
            #
            # During this step, we may detect obsolescence cycles: a node
            # with unknown successors sets but already in the call stack.
            # In such a situation, we arbitrarily set the successors sets of
            # the node to nothing (node pruned) to break the cycle.
            #
            # If no break was encountered we proceed to phase 2.
            #
            # Phase 2 computes successors sets of CURRENT (case 4); see
            # details in phase 2 itself.
            #
            # Note the two levels of iteration in each phase.
            # - The first one handles obsolescence markers using CURRENT as
            #   precursor (successors markers of CURRENT).
            #
            #   Having multiple entries here means divergence.
            #
            # - The second one handles successors defined in each marker.
            #
            #   Having none means a pruned node, multiple successors mean a
            #   split, and a single successor is a standard replacement.
            #
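            # For instance (hypothetical nodes, for illustration only):
            #
            #   (A, ())      -> A was pruned
            #   (A, (B,))    -> A was rewritten as B
            #   (A, (B, C))  -> A was split into B and C
            #
            # while two distinct markers using A as precursor, (A, (B,))
            # and (A, (C,)), mean the rewrite of A has diverged.
            #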
            for mark in sorted(succmarkers[current]):
                for suc in mark[1]:
                    if suc not in cache:
                        if suc in stackedset:
                            # cycle breaking
                            cache[suc] = []
                        else:
                            # case (3): if we have not computed the
                            # successors sets of one of those successors, we
                            # add it to the `toproceed` stack and stop all
                            # work for this iteration.
                            toproceed.append(suc)
                            stackedset.add(suc)
                            break
                else:
                    continue
                break
            else:
                # case (4): we know all successors sets of all direct
                # successors
                #
                # The successors set contributed by each marker depends on
                # the successors sets of all its "successors" nodes.
                #
                # Each different marker is a divergence in the obsolescence
                # history. It contributes successors sets distinct from
                # other markers.
                #
                # Within a marker, a successor may have divergent successors
                # sets. In such a case, the marker will contribute multiple
                # divergent successors sets. If multiple successors have
                # divergent successors sets, a Cartesian product is used.
                #
                # At the end we post-process successors sets to remove
                # duplicated entries and successors sets that are strict
                # subsets of another one.
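                #
                # A small worked example (hypothetical nodes, illustration
                # only): for a marker (X, (B, C)) with
                # cache[B] = [(B1,), (B2,)] and cache[C] = [(C,)], the
                # product below yields two successors sets for this marker:
                # [B1, C] and [B2, C].
                #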
                succssets = []
                for mark in sorted(succmarkers[current]):
                    # successors sets contributed by this marker
                    markss = [[]]
                    for suc in mark[1]:
                        # Cartesian product with previous successors
                        productresult = []
                        for prefix in markss:
                            for suffix in cache[suc]:
                                newss = list(prefix)
                                for part in suffix:
                                    # do not duplicate entries in a
                                    # successors set; the first entry wins.
                                    if part not in newss:
                                        newss.append(part)
                                productresult.append(newss)
                        markss = productresult
                    succssets.extend(markss)
                # remove duplicated entries and subsets
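                #
                # For example (illustration only): given the candidate sets
                # [A] and [A, B], sorting by length keeps [A, B] first; [A]
                # is then a strict subset of a set already seen, so only
                # [A, B] survives in `final`.
                #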
                seen = []
                final = []
                candidate = sorted(((set(s), s) for s in succssets if s),
                                   key=lambda x: len(x[1]), reverse=True)
                for setversion, listversion in candidate:
                    for seenset in seen:
                        if setversion.issubset(seenset):
                            break
                    else:
                        final.append(listversion)
                        seen.append(setversion)
                final.reverse() # put small successors sets first
                cache[current] = final
    return cache[initialnode]

def _knownrevs(repo, nodes):
    """yield revision numbers of the known nodes passed as parameters

    Unknown nodes are silently ignored."""
    torev = repo.changelog.nodemap.get
    for n in nodes:
        rev = torev(n)
        if rev is not None:
            yield rev

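# A minimal usage sketch (the node variables are hypothetical, for
# illustration only):
#
#   revs = list(_knownrevs(repo, [node1, node2]))
#
# Any node missing from the changelog simply does not appear in `revs`.
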
# mapping of 'set-name' -> <function to compute this set>
cachefuncs = {}
def cachefor(name):
    """Decorator to register a function as computing the cache for a set"""
    def decorator(func):
        assert name not in cachefuncs
        cachefuncs[name] = func
        return func
    return decorator

def getrevs(repo, name):
    """Return the set of revisions that belong to the <name> set

    Such an access may compute the set and cache it for future use"""
    repo = repo.unfiltered()
    if not repo.obsstore:
        return frozenset()
    if name not in repo.obsstore.caches:
        repo.obsstore.caches[name] = cachefuncs[name](repo)
    return repo.obsstore.caches[name]
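
# A minimal sketch of how the registry above is meant to be used (the set
# name 'example' and its compute function are hypothetical, for
# illustration only):
#
#   @cachefor('example')
#   def _computeexampleset(repo):
#       return set()
#
#   revs = getrevs(repo, 'example')   # computed once, then cached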

# To keep things simple we need to invalidate obsolescence caches when:
#
# - a new changeset is added
# - the public phase is changed
# - obsolescence markers are added
# - strip is used on a repo
def clearobscaches(repo):
    """Remove all obsolescence-related caches from a repo

    This removes all caches in the obsstore if the obsstore already exists
    on the repo.

    (We could be smarter here given the exact event that triggered the
    cache clearing)"""
    # only clear caches if there is obsstore data in this repo
    if 'obsstore' in repo._filecache:
        repo.obsstore.caches.clear()

@cachefor('obsolete')
def _computeobsoleteset(repo):
    """the set of obsolete revisions"""
    obs = set()
    getrev = repo.changelog.nodemap.get
    getphase = repo._phasecache.phase
    for n in repo.obsstore.successors:
        rev = getrev(n)
        if rev is not None and getphase(repo, rev):
            obs.add(rev)
    return obs

@cachefor('unstable')
def _computeunstableset(repo):
    """the set of non-obsolete revisions with obsolete parents"""
    # revset is not efficient enough here
    # we do (obsolete()::) - obsolete() by hand
    obs = getrevs(repo, 'obsolete')
    if not obs:
        return set()
    cl = repo.changelog
    return set(r for r in cl.descendants(obs) if r not in obs)

@cachefor('suspended')
def _computesuspendedset(repo):
    """the set of obsolete parents with non-obsolete descendants"""
    suspended = repo.changelog.ancestors(getrevs(repo, 'unstable'))
    return set(r for r in getrevs(repo, 'obsolete') if r in suspended)

@cachefor('extinct')
def _computeextinctset(repo):
    """the set of obsolete parents without non-obsolete descendants"""
    return getrevs(repo, 'obsolete') - getrevs(repo, 'suspended')


@cachefor('bumped')
def _computebumpedset(repo):
    """the set of revs trying to obsolete public revisions"""
    bumped = set()
    # util function (avoid attribute lookup in the loop)
    phase = repo._phasecache.phase # would be faster to grab the full list
    public = phases.public
    cl = repo.changelog
    torev = cl.nodemap.get
    obs = getrevs(repo, 'obsolete')
    for rev in repo:
        # We only evaluate mutable, non-obsolete revisions
        if (public < phase(repo, rev)) and (rev not in obs):
            node = cl.node(rev)
            # (future) a cache of precursors may be worth it if splits are
            # very common
            for pnode in allprecursors(repo.obsstore, [node],
                                       ignoreflags=bumpedfix):
                prev = torev(pnode) # unfiltered! but so is phasecache
                if (prev is not None) and (phase(repo, prev) <= public):
                    # we have a public precursor
                    bumped.add(rev)
                    break # Next draft!
    return bumped
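
# For instance (hypothetical scenario, for illustration only): if changeset
# A is public and a marker (A, (A',)) exists with A' still a draft, then A'
# is "bumped": it tries to rewrite an immutable revision.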

@cachefor('divergent')
def _computedivergentset(repo):
    """the set of revs that compete to be the final successors of some
    revision."""
    divergent = set()
    obsstore = repo.obsstore
    newermap = {}
    for ctx in repo.set('(not public()) - obsolete()'):
        mark = obsstore.precursors.get(ctx.node(), ())
        toprocess = set(mark)
        while toprocess:
            prec = toprocess.pop()[0]
            if prec not in newermap:
                successorssets(repo, prec, newermap)
            newer = [n for n in newermap[prec] if n]
            if len(newer) > 1:
                divergent.add(ctx.rev())
                break
            toprocess.update(obsstore.precursors.get(prec, ()))
    return divergent
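
# For instance (hypothetical scenario, for illustration only): with markers
# (A, (B,)) and (A, (C,)), both B and C compete to be A's final successor,
# so both are reported as divergent.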


def createmarkers(repo, relations, flag=0, date=None, metadata=None):
    """Add obsolete markers between changesets in a repo

    <relations> must be an iterable of (<old>, (<new>, ...)[,{metadata}])
    tuples, where `old` and each `new` are changectx objects. metadata is an
    optional dictionary containing metadata for this marker only. It is
    merged with the global metadata specified through the `metadata`
    argument of this function.

    Trying to obsolete a public changeset will raise an exception.

    The current user and date are used unless specified otherwise in the
    metadata attribute.

    This function operates within a transaction of its own, but does
    not take any lock on the repo.
    """
    # prepare metadata
    if metadata is None:
        metadata = {}
    if 'user' not in metadata:
        metadata['user'] = repo.ui.username()
    tr = repo.transaction('add-obsolescence-marker')
    try:
        for rel in relations:
            prec = rel[0]
            sucs = rel[1]
            localmetadata = metadata.copy()
            if 2 < len(rel):
                localmetadata.update(rel[2])

            if not prec.mutable():
                raise util.Abort("cannot obsolete immutable changeset: %s"
                                 % prec)
            nprec = prec.node()
            nsucs = tuple(s.node() for s in sucs)
            npare = None
            if not nsucs:
                npare = tuple(p.node() for p in prec.parents())
            if nprec in nsucs:
                raise util.Abort("changeset %s cannot obsolete itself" % prec)
            repo.obsstore.create(tr, nprec, nsucs, flag, parents=npare,
                                 date=date, metadata=localmetadata)
            repo.filteredrevcache.clear()
        tr.close()
    finally:
        tr.release()
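
# A minimal usage sketch (the changectx variables and the metadata key are
# hypothetical, for illustration only):
#
#   createmarkers(repo, [(oldctx, (newctx,))])   # oldctx rewritten as newctx
#   createmarkers(repo, [(oldctx, ())])          # oldctx pruned
#   createmarkers(repo, [(oldctx, (newctx,), {'note': 'why'})])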

def isenabled(repo, option):
    """Returns True if the given repository has the given obsolete option
    enabled.
    """
    result = set(repo.ui.configlist('experimental', 'evolution'))
    if 'all' in result:
        return True

    # For migration purposes, temporarily return true if the config hasn't
    # been set but _enabled is true.
    if len(result) == 0 and _enabled:
        return True

    # createmarkers must be enabled if other options are enabled
    if ((allowunstableopt in result or exchangeopt in result) and
        createmarkersopt not in result):
        raise util.Abort(_("'createmarkers' obsolete option must be enabled "
                           "if other obsolete options are enabled"))

    return option in result
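
# The option names checked above map to values of the
# `experimental.evolution` config list. For example, enabling only local
# marker creation would look like this in an hgrc (a sketch, assuming the
# option string 'createmarkers' named in the error message above):
#
#   [experimental]
#   evolution = createmarkers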