obsolete: process markers in a stable order...
Mads Kiilerich, r18365:4148414d, default branch
# obsolete.py - obsolete markers handling
#
# Copyright 2012 Pierre-Yves David <pierre-yves.david@ens-lyon.org>
#                Logilab SA <contact@logilab.fr>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""Obsolete markers handling

An obsolete marker maps an old changeset to a list of new
changesets. If the list of new changesets is empty, the old changeset
is said to be "killed". Otherwise, the old changeset is being
"replaced" by the new changesets.

Obsolete markers can be used to record and distribute changeset graph
transformations performed by history rewriting operations, and help
building new tools to reconcile conflicting rewriting actions. To
facilitate conflict resolution, markers include various annotations
besides old and new changeset identifiers, such as creation date or
author name.

The old obsoleted changeset is called a "precursor" and possible
replacements are called "successors". Markers that use changeset X as
a precursor are called "successor markers of X" because they hold
information about the successors of X. Markers that use changeset Y as
a successor are called "precursor markers of Y" because they hold
information about the precursors of Y.

Examples:

- When changeset A is replaced by a changeset A', one marker is stored:

    (A, (A',))

- When changesets A and B are folded into a new changeset C, two markers are
  stored:

    (A, (C,)) and (B, (C,))

- When changeset A is simply "pruned" from the graph, a marker is created:

    (A, ())

- When changeset A is split into B and C, a single marker is used:

    (A, (B, C))

  We use a single marker to distinguish the "split" case from the
  "divergence" case. If two independent operations rewrite the same
  changeset A into A' and A'', we have an error case: divergent
  rewriting. We can detect it because two markers will be created
  independently:

    (A, (B,)) and (A, (C,))

Format
------

Markers are stored in an append-only file stored in
'.hg/store/obsstore'.

The file starts with a version header:

- 1 unsigned byte: version number, starting at zero.


The header is followed by the markers. Each marker is made of:

- 1 unsigned byte: number of new changesets "N", which can be zero.

- 1 unsigned 32-bit integer: metadata size "M" in bytes.

- 1 byte: a bit field. It is reserved for flags used in common
  obsolete marker operations, to avoid repeated decoding of metadata
  entries.

- 20 bytes: obsoleted changeset identifier.

- N*20 bytes: new changeset identifiers.

- M bytes: metadata as a sequence of nul-terminated strings. Each
  string contains a key and a value, separated by a colon ':', without
  additional encoding. Keys cannot contain '\0' or ':' and values
  cannot contain '\0'.
"""
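As a standalone illustration (not part of the module), the fixed-size layout described above can be exercised with Python's `struct` module. The node ids and metadata below are made-up placeholder values:

```python
import struct

# Fixed part per the format above: 1-byte successor count N, a 4-byte
# metadata size M, 1 flag byte, then the 20-byte precursor id, all
# big-endian with no padding ('>').
fmt_fixed = '>BIB20s'

prec = b'\x11' * 20          # hypothetical precursor node id
succ = b'\x22' * 20          # hypothetical successor node id
meta = b'date:0 0\x00user:alice'

# One marker with a single successor, followed by its metadata blob.
record = struct.pack(fmt_fixed, 1, len(meta), 0, prec) + succ + meta

# Decode the fixed part back to confirm the layout round-trips.
fixedsize = struct.calcsize(fmt_fixed)   # 1 + 4 + 1 + 20 = 26 bytes
nbsucc, mdsize, flags, pre = struct.unpack(fmt_fixed, record[:fixedsize])
assert (nbsucc, mdsize, flags, pre) == (1, len(meta), 0, prec)
```

This mirrors what `_readmarkers` below does when walking the obsstore file, one record at a time.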
import struct
import util, base85, node
from i18n import _

_pack = struct.pack
_unpack = struct.unpack

_SEEK_END = 2 # os.SEEK_END was introduced in Python 2.5

# the obsolete feature is not mature enough to be enabled by default.
# you have to rely on a third party extension to enable it.
_enabled = False

# data used for parsing and writing
_fmversion = 0
_fmfixed = '>BIB20s'
_fmnode = '20s'
_fmfsize = struct.calcsize(_fmfixed)
_fnodesize = struct.calcsize(_fmnode)

### obsolescence marker flag

## bumpedfix flag
#
# When a changeset A' succeeds a changeset A which became public, we call A'
# "bumped" because it is a successor of a public changeset.
#
# o A' (bumped)
# |`:
# | o A
# |/
# o Z
#
# The way to solve this situation is to create a new changeset Ad as a child
# of A. This changeset has the same content as A'. So the diff from A to A'
# is the same as the diff from A to Ad. Ad is marked as a successor of A':
# <A', (Ad,)>.
#
# o Ad
# |`:
# | x A'
# |'|
# o | A
# |/
# o Z
#
# But by transitivity Ad is also a successor of A. To avoid having Ad marked
# as bumped too, we add the `bumpedfix` flag to the marker <A', (Ad,)>.
# This flag means that the successors are an interdiff that fixes the bumped
# situation, breaking the transitivity of "bumped" here.
bumpedfix = 1

def _readmarkers(data):
    """Read and enumerate markers from raw data"""
    off = 0
    diskversion = _unpack('>B', data[off:off + 1])[0]
    off += 1
    if diskversion != _fmversion:
        raise util.Abort(_('parsing obsolete marker: unknown version %r')
                         % diskversion)

    # Loop on markers
    l = len(data)
    while off + _fmfsize <= l:
        # read fixed part
        cur = data[off:off + _fmfsize]
        off += _fmfsize
        nbsuc, mdsize, flags, pre = _unpack(_fmfixed, cur)
        # read replacement
        sucs = ()
        if nbsuc:
            s = (_fnodesize * nbsuc)
            cur = data[off:off + s]
            sucs = _unpack(_fmnode * nbsuc, cur)
            off += s
        # read metadata
        # (metadata will be decoded on demand)
        metadata = data[off:off + mdsize]
        if len(metadata) != mdsize:
            raise util.Abort(_('parsing obsolete marker: metadata is too '
                               'short, %d bytes expected, got %d')
                             % (mdsize, len(metadata)))
        off += mdsize
        yield (pre, sucs, flags, metadata)

def encodemeta(meta):
    """Return encoded metadata string to string mapping.

    Assume no ':' in key and no '\0' in both key and value."""
    for key, value in meta.iteritems():
        if ':' in key or '\0' in key:
            raise ValueError("':' and '\0' are forbidden in metadata keys")
        if '\0' in value:
            raise ValueError("'\0' is forbidden in metadata values")
    return '\0'.join(['%s:%s' % (k, meta[k]) for k in sorted(meta)])

def decodemeta(data):
    """Return string to string dictionary from encoded version."""
    d = {}
    for l in data.split('\0'):
        if l:
            # split on the first ':' only, as values may contain ':'
            key, value = l.split(':', 1)
            d[key] = value
    return d
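A minimal standalone sketch of the encode/decode pair above, modernized to Python 3 (`.items()` instead of `.iteritems()`) so it can be run directly; the sample metadata is made up:

```python
def encodemeta(meta):
    # keys may not contain ':' or NUL; values may not contain NUL
    for key, value in meta.items():
        if ':' in key or '\0' in key:
            raise ValueError("':' and '\\0' are forbidden in metadata keys")
        if '\0' in value:
            raise ValueError("'\\0' is forbidden in metadata values")
    return '\0'.join('%s:%s' % (k, meta[k]) for k in sorted(meta))

def decodemeta(data):
    d = {}
    for entry in data.split('\0'):
        if entry:
            # maxsplit=1, since values may themselves contain ':'
            key, value = entry.split(':', 1)
            d[key] = value
    return d

sample = {'user': 'alice', 'date': '0 0'}
assert decodemeta(encodemeta(sample)) == sample
```

Limiting the split to the first colon matters because the format only forbids ':' in keys, not in values.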

class marker(object):
    """Wrap obsolete marker raw data"""

    def __init__(self, repo, data):
        # the repo argument will be used to create changectx in a later version
        self._repo = repo
        self._data = data
        self._decodedmeta = None

    def precnode(self):
        """Precursor changeset node identifier"""
        return self._data[0]

    def succnodes(self):
        """List of successor changeset node identifiers"""
        return self._data[1]

    def metadata(self):
        """Decoded metadata dictionary"""
        if self._decodedmeta is None:
            self._decodedmeta = decodemeta(self._data[3])
        return self._decodedmeta

    def date(self):
        """Creation date as (unixtime, offset)"""
        parts = self.metadata()['date'].split(' ')
        return (float(parts[0]), int(parts[1]))

class obsstore(object):
    """Store obsolete markers

    Markers can be accessed with two mappings:
    - precursors[x] -> set(markers on precursors edges of x)
    - successors[x] -> set(markers on successors edges of x)
    """

    def __init__(self, sopener):
        # caches for various obsolescence related computations
        self.caches = {}
        self._all = []
        # new markers to serialize
        self.precursors = {}
        self.successors = {}
        self.sopener = sopener
        data = sopener.tryread('obsstore')
        if data:
            self._load(_readmarkers(data))

    def __iter__(self):
        return iter(self._all)

    def __nonzero__(self):
        return bool(self._all)

    def create(self, transaction, prec, succs=(), flag=0, metadata=None):
        """obsolete: add a new obsolete marker

        * ensuring it is hashable
        * check mandatory metadata
        * encode metadata
        """
        if metadata is None:
            metadata = {}
        if len(prec) != 20:
            raise ValueError(prec)
        for succ in succs:
            if len(succ) != 20:
                raise ValueError(succ)
        marker = (str(prec), tuple(succs), int(flag), encodemeta(metadata))
        self.add(transaction, [marker])

    def add(self, transaction, markers):
        """Add new markers to the store

        Take care of filtering duplicates.
        Return the number of new markers."""
        if not _enabled:
            raise util.Abort('obsolete feature is not enabled on this repo')
        new = [m for m in markers if m not in self._all]
        if new:
            f = self.sopener('obsstore', 'ab')
            try:
                # Whether the file's current position is at the beginning or
                # at the end after opening a file for appending is
                # implementation defined. So we must seek to the end before
                # calling tell(), or we may get a zero offset for non-zero
                # sized files on some platforms (issue3543).
                f.seek(0, _SEEK_END)
                offset = f.tell()
                transaction.add('obsstore', offset)
                # offset == 0: new file - add the version header
                for bytes in _encodemarkers(new, offset == 0):
                    f.write(bytes)
            finally:
                # XXX: f.close() == filecache invalidation == obsstore rebuilt.
                # call 'filecacheentry.refresh()' here
                f.close()
            self._load(new)
            # new markers *may* have changed several sets. invalidate the cache.
            self.caches.clear()
        return len(new)

    def mergemarkers(self, transaction, data):
        markers = _readmarkers(data)
        self.add(transaction, markers)

    def _load(self, markers):
        for mark in markers:
            self._all.append(mark)
            pre, sucs = mark[:2]
            self.successors.setdefault(pre, set()).add(mark)
            for suc in sucs:
                self.precursors.setdefault(suc, set()).add(mark)
        if node.nullid in self.precursors:
            raise util.Abort(_('bad obsolescence marker detected: '
                               'invalid successors nullid'))

def _encodemarkers(markers, addheader=False):
    # Kept separate from flushmarkers(), it will be reused for
    # markers exchange.
    if addheader:
        yield _pack('>B', _fmversion)
    for marker in markers:
        yield _encodeonemarker(marker)


def _encodeonemarker(marker):
    pre, sucs, flags, metadata = marker
    nbsuc = len(sucs)
    format = _fmfixed + (_fmnode * nbsuc)
    data = [nbsuc, len(metadata), flags, pre]
    data.extend(sucs)
    return _pack(format, *data) + metadata
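The encoder's trick of building a variable-length format string (one `20s` per successor) can be checked in isolation. This sketch uses the same format constants with made-up node ids:

```python
import struct

# Mirror _encodeonemarker: the format string grows by one 20-byte node
# code per successor, so one struct.pack call emits the whole record.
fmfixed, fmnode = '>BIB20s', '20s'
prec = b'\xaa' * 20
succs = (b'\xbb' * 20, b'\xcc' * 20)   # e.g. a split into two successors
metadata = b''

fmt = fmfixed + fmnode * len(succs)
record = struct.pack(fmt, len(succs), len(metadata), 0, prec, *succs) + metadata

assert len(record) == struct.calcsize(fmt)   # 26 fixed + 2 * 20 = 66 bytes
assert struct.unpack(fmt, record)[4:] == succs
```

Because the successor count is stored in the first byte, the reading side can reconstruct the same format string before unpacking, which is exactly what `_readmarkers` does.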

# arbitrarily picked to fit into the 8K limit from the HTTP server
# you have to take into account:
# - the version header
# - the base85 encoding
_maxpayload = 5300

def listmarkers(repo):
    """List markers over pushkey"""
    if not repo.obsstore:
        return {}
    keys = {}
    parts = []
    currentlen = _maxpayload * 2 # ensure we create a new part
    for marker in repo.obsstore:
        nextdata = _encodeonemarker(marker)
        if (len(nextdata) + currentlen > _maxpayload):
            currentpart = []
            currentlen = 0
            parts.append(currentpart)
        currentpart.append(nextdata)
        currentlen += len(nextdata)
    for idx, part in enumerate(reversed(parts)):
        data = ''.join([_pack('>B', _fmversion)] + part)
        keys['dump%i' % idx] = base85.b85encode(data)
    return keys
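The greedy chunking in `listmarkers` can be sketched on its own: start a fresh part whenever adding the next payload would push the running total past the cap. The cap and payload sizes below are hypothetical stand-ins for `_maxpayload` and the encoded marker lengths:

```python
maxpayload = 10
parts, currentpart = [], []
currentlen = maxpayload * 2   # oversized sentinel: forces a first part

for size in [4, 4, 4, 7, 2]:
    if size + currentlen > maxpayload:
        # current part is full (or this is the first item): open a new one
        currentpart, currentlen = [], 0
        parts.append(currentpart)
    currentpart.append(size)
    currentlen += size

assert parts == [[4, 4], [4], [7, 2]]
```

Note the check happens before appending, so a single payload larger than the cap still lands in a part of its own rather than being dropped.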

def pushmarker(repo, key, old, new):
    """Push markers over pushkey"""
    if not key.startswith('dump'):
        repo.ui.warn(_('unknown key: %r') % key)
        return 0
    if old:
        repo.ui.warn(_('unexpected old value for %r') % key)
        return 0
    data = base85.b85decode(new)
    lock = repo.lock()
    try:
        tr = repo.transaction('pushkey: obsolete markers')
        try:
            repo.obsstore.mergemarkers(tr, data)
            tr.close()
            return 1
        finally:
            tr.release()
    finally:
        lock.release()

def allmarkers(repo):
    """all obsolete markers known in a repository"""
    for markerdata in repo.obsstore:
        yield marker(repo, markerdata)

def precursormarkers(ctx):
    """obsolete markers marking this changeset as a successor"""
    for data in ctx._repo.obsstore.precursors.get(ctx.node(), ()):
        yield marker(ctx._repo, data)

def successormarkers(ctx):
    """obsolete markers making this changeset obsolete"""
    for data in ctx._repo.obsstore.successors.get(ctx.node(), ()):
        yield marker(ctx._repo, data)

def allsuccessors(obsstore, nodes, ignoreflags=0):
    """Yield node for every successor of <nodes>.

    Some successors may be unknown locally.

    This is a linear yield unsuited to detecting split changesets."""
    remaining = set(nodes)
    seen = set(remaining)
    while remaining:
        current = remaining.pop()
        yield current
        for mark in obsstore.successors.get(current, ()):
            # ignore markers flagged with the specified flag
            if mark[2] & ignoreflags:
                continue
            for suc in mark[1]:
                if suc not in seen:
                    seen.add(suc)
                    remaining.add(suc)
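The transitive walk above can be demonstrated with a toy marker mapping in place of a real obsstore; node names and markers here are invented, with each marker shaped as `(precursor, successors, flags, metadata)`:

```python
# Hypothetical successor-marker index, keyed by precursor node.
succmarkers = {
    'A': [('A', ('B',), 0, '')],
    'B': [('B', ('C', 'D'), 0, '')],   # B was split into C and D
}

def allsuccessors(succmarkers, nodes, ignoreflags=0):
    # Same traversal as above: a worklist plus a seen-set to stop cycles.
    remaining = set(nodes)
    seen = set(remaining)
    while remaining:
        current = remaining.pop()
        yield current
        for mark in succmarkers.get(current, ()):
            if mark[2] & ignoreflags:
                continue
            for suc in mark[1]:
                if suc not in seen:
                    seen.add(suc)
                    remaining.add(suc)

assert sorted(allsuccessors(succmarkers, ['A'])) == ['A', 'B', 'C', 'D']
```

As the docstring warns, the flat yield loses the grouping information: you cannot tell from the output alone that C and D came from a split rather than two divergent rewrites, which is what `successorssets` below is for.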

def successorssets(repo, initialnode, cache=None):
    """Return all sets of successors of an initial node

    The successors set of a changeset A is a group of revisions that
    succeed A. It succeeds A as a consistent whole, each revision being
    only a partial replacement. Successors sets contain non-obsolete
    changesets only.

    In most cases a changeset A has zero successors sets (changeset
    pruned) or a single successors set containing a single successor
    (changeset A replaced by A').

    When a changeset is split, it results in a successors set containing
    more than a single element. Divergent rewriting will result in
    multiple successors sets.

    They are returned as a list of tuples containing all valid successors
    sets.

    Final successors unknown locally are considered plain prunes
    (obsoleted without successors).

    The optional `cache` parameter is a dictionary that may contain
    precomputed successors sets. It is meant to reuse the computation of
    a previous call to `successorssets` when multiple calls are made at
    the same time. The cache dictionary is updated in place. The caller
    is responsible for its life span. Code that makes multiple calls to
    `successorssets` *must* use this cache mechanism or suffer terrible
    performance."""

    succmarkers = repo.obsstore.successors

    # Stack of nodes we search successors sets for
    toproceed = [initialnode]
    # set version of above list for fast loop detection
    # element added to "toproceed" must be added here
    stackedset = set(toproceed)
    if cache is None:
        cache = {}

    # This while loop is the flattened version of a recursive search for
    # successors sets
    #
    # def successorssets(x):
    #     successors = directsuccessors(x)
    #     ss = [[]]
    #     for succ in directsuccessors(x):
    #         # product as in itertools cartesian product
    #         ss = product(ss, successorssets(succ))
    #     return ss
    #
    # But we can not use plain recursive calls here:
    # - that would blow the python call stack
    # - obsolescence markers may have cycles, we need to handle them.
    #
    # The `toproceed` list acts as our call stack. Every node we search
    # successors sets for is stacked there.
    #
    # The `stackedset` is a set version of this stack used to check if a
    # node is already stacked. This check is used to detect cycles and
    # prevent infinite loops.
    #
    # successors sets of all nodes are stored in the `cache` dictionary.
    #
    # After this while loop ends we use the cache to return the successors
    # sets for the node requested by the caller.
    while toproceed:
        # Every iteration tries to compute the successors sets of the
        # topmost node of the stack: CURRENT.
        #
        # There are four possible outcomes:
        #
        # 1) We already know the successors sets of CURRENT:
        #    -> mission accomplished, pop it from the stack.
        # 2) Node is not obsolete:
        #    -> the node is its own successors set. Add it to the cache.
        # 3) We do not know the successors sets of the direct successors of
        #    CURRENT:
        #    -> We add those successors to the stack.
        # 4) We know the successors sets of all direct successors of CURRENT:
        #    -> We can compute CURRENT's successors sets and add them to the
        #       cache.
        #
        current = toproceed[-1]
        if current in cache:
            # case (1): We already know the successors sets
            stackedset.remove(toproceed.pop())
        elif current not in succmarkers:
            # case (2): The node is not obsolete.
            if current in repo:
                # We have a valid last successor.
                cache[current] = [(current,)]
            else:
                # Final obsolete version is unknown locally.
                # Do not count that as a valid successor.
                cache[current] = []
        else:
            # cases (3) and (4)
            #
            # We proceed in two phases. Phase 1 aims to distinguish case (3)
            # from case (4):
            #
            # For each direct successor of CURRENT, we check whether its
            # successors sets are known. If they are not, we stack the
            # unknown node and proceed to the next iteration of the while
            # loop. (case 3)
            #
            # During this step, we may detect obsolescence cycles: a node
            # with unknown successors sets but already in the call stack.
            # In such a situation, we arbitrarily set the successors sets of
510 # In such a situation, we arbitrary set the successors sets of
511 # the node to nothing (node pruned) to break the cycle.
511 # the node to nothing (node pruned) to break the cycle.
512 #
512 #
513 # If no break was encountered we proceeed to phase 2.
513 # If no break was encountered we proceeed to phase 2.
514 #
514 #
515 # Phase 2 computes successors sets of CURRENT (case 4); see details
515 # Phase 2 computes successors sets of CURRENT (case 4); see details
516 # in phase 2 itself.
516 # in phase 2 itself.
517 #
517 #
518 # Note the two levels of iteration in each phase.
518 # Note the two levels of iteration in each phase.
519 # - The first one handles obsolescence markers using CURRENT as
519 # - The first one handles obsolescence markers using CURRENT as
520 # precursor (successors markers of CURRENT).
520 # precursor (successors markers of CURRENT).
521 #
521 #
522 # Having multiple entry here means divergence.
522 # Having multiple entry here means divergence.
523 #
523 #
524 # - The second one handles successors defined in each marker.
524 # - The second one handles successors defined in each marker.
525 #
525 #
526 # Having none means pruned node, multiple successors means split,
526 # Having none means pruned node, multiple successors means split,
527 # single successors are standard replacement.
527 # single successors are standard replacement.
528 #
528 #
529 for mark in succmarkers[current]:
529 for mark in sorted(succmarkers[current]):
530                 for suc in mark[1]:
531                     if suc not in cache:
532                         if suc in stackedset:
533                             # cycle breaking
534                             cache[suc] = []
535                         else:
536                             # case (3) If we have not computed successors sets
537                             # of one of those successors we add it to the
538                             # `toproceed` stack and stop all work for this
539                             # iteration.
540                             toproceed.append(suc)
541                             stackedset.add(suc)
542                             break
543                 else:
544                     continue
545                 break
546             else:
547                 # case (4): we know all successors sets of all direct
548                 # successors
549                 #
550                 # The successors set contributed by each marker depends on the
551                 # successors sets of all its "successors" nodes.
552                 #
553                 # Each different marker is a divergence in the obsolescence
554                 # history. It contributes successors sets distinct from other
555                 # markers.
556                 #
557                 # Within a marker, a successor may have divergent successors
558                 # sets. In such a case, the marker will contribute multiple
559                 # divergent successors sets. If multiple successors have
560                 # divergent successors sets, a cartesian product is used.
561                 #
562                 # At the end we post-process successors sets to remove
563                 # duplicated entries and successors sets that are strict subsets
564                 # of another one.
565                 succssets = []
566 for mark in succmarkers[current]:
566 for mark in sorted(succmarkers[current]):
567                     # successors sets contributed by this marker
568                     markss = [[]]
569                     for suc in mark[1]:
570                         # cartesian product with previous successors
571                         productresult = []
572                         for prefix in markss:
573                             for suffix in cache[suc]:
574                                 newss = list(prefix)
575                                 for part in suffix:
576                                     # do not duplicate entries in a successors
577                                     # set; the first entry wins.
578                                     if part not in newss:
579                                         newss.append(part)
580                                 productresult.append(newss)
581                         markss = productresult
582                     succssets.extend(markss)
583                 # remove duplicated and subset
584                 seen = []
585                 final = []
586                 candidate = sorted(((set(s), s) for s in succssets if s),
587                                    key=lambda x: len(x[1]), reverse=True)
588                 for setversion, listversion in candidate:
589                     for seenset in seen:
590                         if setversion.issubset(seenset):
591                             break
592                     else:
593                         final.append(listversion)
594                         seen.append(setversion)
595                 final.reverse() # put small successors sets first
596                 cache[current] = final
597     return cache[initialnode]
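The phase 2 logic above (a cartesian product of successors sets per marker, followed by pruning of duplicated entries and strict subsets) can be exercised outside Mercurial. This is a standalone sketch of that combination step only; the function names and the toy `cache` contents are illustrative, not Mercurial's API.

```python
def combine_marker(successors, cache):
    """Successors sets contributed by one marker: cartesian product of
    the known successors sets of each of the marker's successors."""
    markss = [[]]
    for suc in successors:
        productresult = []
        for prefix in markss:
            for suffix in cache[suc]:
                newss = list(prefix)
                for part in suffix:
                    if part not in newss:  # first entry wins
                        newss.append(part)
                productresult.append(newss)
        markss = productresult
    return markss

def prune(succssets):
    """Drop duplicated successors sets and sets that are strict subsets
    of another one; small sets come out first."""
    seen, final = [], []
    candidates = sorted(((set(s), s) for s in succssets if s),
                        key=lambda x: len(x[1]), reverse=True)
    for setversion, listversion in candidates:
        if not any(setversion.issubset(s) for s in seen):
            final.append(listversion)
            seen.append(setversion)
    final.reverse()
    return final
```

For example, with `cache = {'b': [('b',)], 'c': [('c',), ('d',)]}`, `combine_marker(['b', 'c'], cache)` yields the two divergent sets `[['b', 'c'], ['b', 'd']]`.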
598
599 def _knownrevs(repo, nodes):
600     """yield the revision numbers of the known nodes passed in parameters
601
602     Unknown revisions are silently ignored."""
603     torev = repo.changelog.nodemap.get
604     for n in nodes:
605         rev = torev(n)
606         if rev is not None:
607             yield rev
608
609 # mapping of 'set-name' -> <function to compute this set>
610 cachefuncs = {}
611 def cachefor(name):
612     """Decorator to register a function as computing the cache for a set"""
613     def decorator(func):
614         assert name not in cachefuncs
615         cachefuncs[name] = func
616         return func
617     return decorator
618
619 def getrevs(repo, name):
620     """Return the set of revisions that belong to the <name> set
621
622     Such access may compute the set and cache it for future use"""
623     repo = repo.unfiltered()
624     if not repo.obsstore:
625         return ()
626     if name not in repo.obsstore.caches:
627         repo.obsstore.caches[name] = cachefuncs[name](repo)
628     return repo.obsstore.caches[name]
629
630 # To be simple we need to invalidate the obsolescence cache when:
631 #
632 # - a new changeset is added
633 # - a public phase is changed
634 # - obsolescence markers are added
635 # - strip is used on a repo
636 def clearobscaches(repo):
637     """Remove all obsolescence-related caches from a repo
638
639     This removes all caches in obsstore if the obsstore already exists on the
640     repo.
641
642     (We could be smarter here given the exact event that triggers the cache
643     clearing)"""
644     # only clear the caches if there is obsstore data in this repo
645     if 'obsstore' in repo._filecache:
646         repo.obsstore.caches.clear()
647
648 @cachefor('obsolete')
649 def _computeobsoleteset(repo):
650     """the set of obsolete revisions"""
651     obs = set()
652     getrev = repo.changelog.nodemap.get
653     getphase = repo._phasecache.phase
654     for node in repo.obsstore.successors:
655         rev = getrev(node)
656         if rev is not None and getphase(repo, rev):
657             obs.add(rev)
658     return obs
659
660 @cachefor('unstable')
661 def _computeunstableset(repo):
662     """the set of non obsolete revisions with obsolete parents"""
663     # revset is not efficient enough here
664     # we do (obsolete()::) - obsolete() by hand
665     obs = getrevs(repo, 'obsolete')
666     if not obs:
667         return set()
668     cl = repo.changelog
669     return set(r for r in cl.descendants(obs) if r not in obs)
670
671 @cachefor('suspended')
672 def _computesuspendedset(repo):
673     """the set of obsolete parents with non obsolete descendants"""
674     suspended = repo.changelog.ancestors(getrevs(repo, 'unstable'))
675     return set(r for r in getrevs(repo, 'obsolete') if r in suspended)
676
677 @cachefor('extinct')
678 def _computeextinctset(repo):
679     """the set of obsolete parents without non obsolete descendants"""
680     return getrevs(repo, 'obsolete') - getrevs(repo, 'suspended')
681
682
683 @cachefor('bumped')
684 def _computebumpedset(repo):
685     """the set of revs trying to obsolete public revisions"""
686     # get all possible bumped changesets
687     tonode = repo.changelog.node
688     publicnodes = (tonode(r) for r in repo.revs('public()'))
689     successors = allsuccessors(repo.obsstore, publicnodes,
690                                ignoreflags=bumpedfix)
691     # revisions that are public or already obsolete don't count as bumped
692     query = '%ld - obsolete() - public()'
693     return set(repo.revs(query, _knownrevs(repo, successors)))
694
695 @cachefor('divergent')
696 def _computedivergentset(repo):
697     """the set of revs that compete to be the final successors of some revision.
698     """
699     divergent = set()
700     obsstore = repo.obsstore
701     newermap = {}
702     for ctx in repo.set('(not public()) - obsolete()'):
703         mark = obsstore.precursors.get(ctx.node(), ())
704         toprocess = set(mark)
705         while toprocess:
706             prec = toprocess.pop()[0]
707             if prec not in newermap:
708                 successorssets(repo, prec, newermap)
709             newer = [n for n in newermap[prec] if n]
710             if len(newer) > 1:
711                 divergent.add(ctx.rev())
712                 break
713             toprocess.update(obsstore.precursors.get(prec, ()))
714     return divergent
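The divergence test in `_computedivergentset` boils down to: a precursor is divergent when it has more than one non-empty successors set. A hedged standalone sketch of just that predicate, with an invented `newermap` mimicking the shape `successorssets()` fills in (the keys and values below are made up for illustration):

```python
# Toy stand-in for the map filled by successorssets(): each precursor
# maps to its list of successors sets (tuples of successor ids).
newermap = {
    'p1': [('a',), ('b',)],  # two competing successors sets -> divergent
    'p2': [('c',)],          # a single replacement -> not divergent
    'p3': [()],              # pruned (empty successors set) -> not divergent
}

def isdivergent(prec):
    """True when more than one non-empty successors set competes."""
    newer = [n for n in newermap[prec] if n]
    return len(newer) > 1
```

This mirrors the `newer = [n for n in newermap[prec] if n]` filter above: empty sets (pruning) are discarded before counting competitors.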
715
716
717 def createmarkers(repo, relations, flag=0, metadata=None):
718     """Add obsolete markers between changesets in a repo
719
720     <relations> must be an iterable of (<old>, (<new>, ...)) tuples.
721     `old` and each `new` are changectx objects.
722
723     Trying to obsolete a public changeset will raise an exception.
724
725     The current user and date are used unless specified otherwise in the
726     metadata attribute.
727
728     This function operates within a transaction of its own, but does
729     not take any lock on the repo.
730     """
731     # prepare metadata
732     if metadata is None:
733         metadata = {}
734     if 'date' not in metadata:
735         metadata['date'] = '%i %i' % util.makedate()
736     if 'user' not in metadata:
737         metadata['user'] = repo.ui.username()
738     tr = repo.transaction('add-obsolescence-marker')
739     try:
740         for prec, sucs in relations:
741             if not prec.mutable():
742                 raise util.Abort("cannot obsolete immutable changeset: %s"
743                                  % prec)
744             nprec = prec.node()
745             nsucs = tuple(s.node() for s in sucs)
746             if nprec in nsucs:
747                 raise util.Abort("changeset %s cannot obsolete itself" % prec)
748             repo.obsstore.create(tr, nprec, nsucs, flag, metadata)
749             repo.filteredrevcache.clear()
750         tr.close()
751     finally:
752         tr.release()
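The change in this commit wraps marker iteration in `sorted()`. The motivation can be shown in plain Python: markers live in unordered containers, so plain iteration order is unspecified and any output derived from it can vary between runs, while sorting the marker tuples gives a reproducible processing order (tuples compare element-wise). The marker values below are invented for illustration, not real obsstore entries.

```python
# Markers for one precursor, stored in a set: iteration order over this
# set is unspecified, so anything derived from plain iteration is not
# stable across runs or Python versions.
markers = {
    (b'node-old', (b'succ-b',), 0, ()),
    (b'node-old', (b'succ-a',), 0, ()),
}

# sorted() compares the marker tuples element-wise, giving one canonical,
# reproducible processing order -- the point of "process markers in a
# stable order".
stable = sorted(markers)
first_successors = [m[1][0] for m in stable]
```

With the data above, `first_successors` is always `[b'succ-a', b'succ-b']`, matching how the test expectations below could be made deterministic.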
@@ -1,447 +1,447 b''
1 Test file dedicated to testing the divergent troubles from obsolete changesets.
2
3 This is the most complex trouble by far, so we isolate it in a dedicated
4 file.
5
6 Enable obsolete
7
8   $ cat > obs.py << EOF
9   > import mercurial.obsolete
10   > mercurial.obsolete._enabled = True
11   > EOF
12   $ cat >> $HGRCPATH << EOF
13   > [ui]
14   > logtemplate = {rev}:{node|short} {desc}\n
15   > [extensions]
16   > obs=${TESTTMP}/obs.py
17   > [alias]
18   > debugobsolete = debugobsolete -d '0 0'
19   > [phases]
20   > publish=False
21   > EOF
22
23
24   $ mkcommit() {
25   >    echo "$1" > "$1"
26   >    hg add "$1"
27   >    hg ci -m "$1"
28   > }
29   $ getid() {
30   >    hg id --debug --hidden -ir "desc('$1')"
31   > }
32
33 setup repo
34
35   $ hg init reference
36   $ cd reference
37   $ mkcommit base
38   $ mkcommit A_0
39   $ hg up 0
40   0 files updated, 0 files merged, 1 files removed, 0 files unresolved
41   $ mkcommit A_1
42   created new head
43   $ hg up 0
44   0 files updated, 0 files merged, 1 files removed, 0 files unresolved
45   $ mkcommit A_2
46   created new head
47   $ hg up 0
48   0 files updated, 0 files merged, 1 files removed, 0 files unresolved
49   $ cd ..
50
51
52   $ newcase() {
53   >    hg clone -u 0 -q reference $1
54   >    cd $1
55   > }
56
57 direct divergence
58 -----------------
59
60 A_0 has two direct and divergent successors, A_1 and A_2
61
62   $ newcase direct
63   $ hg debugobsolete `getid A_0` `getid A_1`
64   $ hg debugobsolete `getid A_0` `getid A_2`
65   invalid branchheads cache (unserved): tip differs
66   $ hg log -G --hidden
67   o  3:392fd25390da A_2
68   |
69   | o  2:82623d38b9ba A_1
70   |/
71   | x  1:007dc284c1f8 A_0
72   |/
73   @  0:d20a80d4def3 base
74
75   $ hg debugsuccessorssets --hidden 'all()'
76   d20a80d4def3
77       d20a80d4def3
78   007dc284c1f8
79 82623d38b9ba
79 392fd25390da
80 392fd25390da
80 82623d38b9ba
81   82623d38b9ba
82       82623d38b9ba
83   392fd25390da
84       392fd25390da
85   $ hg log -r 'divergent()'
86   2:82623d38b9ba A_1
87   3:392fd25390da A_2
88
89 check that mercurial refuses to push
90
91   $ hg init ../other
92   $ hg push ../other
93   pushing to ../other
94   searching for changes
95   abort: push includes divergent changeset: 392fd25390da!
96   [255]
97
98   $ cd ..
99
100
101 indirect divergence with known changeset
102 -------------------------------------------
103
104   $ newcase indirect_known
105   $ hg debugobsolete `getid A_0` `getid A_1`
106   $ hg debugobsolete `getid A_0` `getid A_2`
107   invalid branchheads cache (unserved): tip differs
108   $ mkcommit A_3
109   created new head
110   $ hg debugobsolete `getid A_2` `getid A_3`
111   $ hg log -G --hidden
112   @  4:01f36c5a8fda A_3
113   |
114   | x  3:392fd25390da A_2
115   |/
116   | o  2:82623d38b9ba A_1
117   |/
118   | x  1:007dc284c1f8 A_0
119   |/
120   o  0:d20a80d4def3 base
121
122   $ hg debugsuccessorssets --hidden 'all()'
123   d20a80d4def3
124       d20a80d4def3
125   007dc284c1f8
126 82623d38b9ba
126 01f36c5a8fda
127 01f36c5a8fda
127 82623d38b9ba
128   82623d38b9ba
129       82623d38b9ba
130   392fd25390da
131       01f36c5a8fda
132   01f36c5a8fda
133       01f36c5a8fda
134   $ hg log -r 'divergent()'
135   2:82623d38b9ba A_1
136   4:01f36c5a8fda A_3
137   $ cd ..
138
139
140 indirect divergence with unknown changeset
141 -------------------------------------------
142
143   $ newcase indirect_unknown
144   $ hg debugobsolete `getid A_0` aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
145   $ hg debugobsolete aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa `getid A_1`
146   invalid branchheads cache (unserved): tip differs
147   $ hg debugobsolete `getid A_0` `getid A_2`
148   $ hg log -G --hidden
149   o  3:392fd25390da A_2
150   |
151   | o  2:82623d38b9ba A_1
152   |/
153   | x  1:007dc284c1f8 A_0
154   |/
155   @  0:d20a80d4def3 base
156
157   $ hg debugsuccessorssets --hidden 'all()'
158   d20a80d4def3
159       d20a80d4def3
160   007dc284c1f8
161 82623d38b9ba
161 392fd25390da
162 392fd25390da
162 82623d38b9ba
163 82623d38b9ba
163 82623d38b9ba
164 82623d38b9ba
164 82623d38b9ba
165 392fd25390da
165 392fd25390da
166 392fd25390da
166 392fd25390da
167 $ hg log -r 'divergent()'
167 $ hg log -r 'divergent()'
168 2:82623d38b9ba A_1
168 2:82623d38b9ba A_1
169 3:392fd25390da A_2
169 3:392fd25390da A_2
170 $ cd ..
170 $ cd ..
171
171
172 do not take unknown node in account if they are final
172 do not take unknown node in account if they are final
173 -----------------------------------------------------
173 -----------------------------------------------------
174
174
175 $ newcase final-unknown
175 $ newcase final-unknown
176 $ hg debugobsolete `getid A_0` `getid A_1`
176 $ hg debugobsolete `getid A_0` `getid A_1`
177 $ hg debugobsolete `getid A_1` `getid A_2`
177 $ hg debugobsolete `getid A_1` `getid A_2`
178 invalid branchheads cache (unserved): tip differs
178 invalid branchheads cache (unserved): tip differs
179 $ hg debugobsolete `getid A_0` bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
179 $ hg debugobsolete `getid A_0` bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
180 $ hg debugobsolete bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb cccccccccccccccccccccccccccccccccccccccc
180 $ hg debugobsolete bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb cccccccccccccccccccccccccccccccccccccccc
181 $ hg debugobsolete `getid A_1` dddddddddddddddddddddddddddddddddddddddd
181 $ hg debugobsolete `getid A_1` dddddddddddddddddddddddddddddddddddddddd
182
182
183 $ hg debugsuccessorssets --hidden 'desc('A_0')'
183 $ hg debugsuccessorssets --hidden 'desc('A_0')'
184 007dc284c1f8
184 007dc284c1f8
185 392fd25390da
185 392fd25390da
186
186
187 $ cd ..
187 $ cd ..
188
188
189 divergence that converge again is not divergence anymore
189 divergence that converge again is not divergence anymore
190 -----------------------------------------------------
190 -----------------------------------------------------
191
191
192 $ newcase converged_divergence
192 $ newcase converged_divergence
193 $ hg debugobsolete `getid A_0` `getid A_1`
193 $ hg debugobsolete `getid A_0` `getid A_1`
194 $ hg debugobsolete `getid A_0` `getid A_2`
194 $ hg debugobsolete `getid A_0` `getid A_2`
195 invalid branchheads cache (unserved): tip differs
195 invalid branchheads cache (unserved): tip differs
196 $ mkcommit A_3
196 $ mkcommit A_3
197 created new head
197 created new head
198 $ hg debugobsolete `getid A_1` `getid A_3`
198 $ hg debugobsolete `getid A_1` `getid A_3`
199 $ hg debugobsolete `getid A_2` `getid A_3`
199 $ hg debugobsolete `getid A_2` `getid A_3`
200 $ hg log -G --hidden
200 $ hg log -G --hidden
201 @ 4:01f36c5a8fda A_3
201 @ 4:01f36c5a8fda A_3
202 |
202 |
203 | x 3:392fd25390da A_2
203 | x 3:392fd25390da A_2
204 |/
204 |/
205 | x 2:82623d38b9ba A_1
205 | x 2:82623d38b9ba A_1
206 |/
206 |/
207 | x 1:007dc284c1f8 A_0
207 | x 1:007dc284c1f8 A_0
208 |/
208 |/
209 o 0:d20a80d4def3 base
209 o 0:d20a80d4def3 base
210
210
211 $ hg debugsuccessorssets --hidden 'all()'
211 $ hg debugsuccessorssets --hidden 'all()'
212 d20a80d4def3
212 d20a80d4def3
213 d20a80d4def3
213 d20a80d4def3
214 007dc284c1f8
214 007dc284c1f8
215 01f36c5a8fda
215 01f36c5a8fda
216 82623d38b9ba
216 82623d38b9ba
217 01f36c5a8fda
217 01f36c5a8fda
218 392fd25390da
218 392fd25390da
219 01f36c5a8fda
219 01f36c5a8fda
220 01f36c5a8fda
220 01f36c5a8fda
221 01f36c5a8fda
221 01f36c5a8fda
222 $ hg log -r 'divergent()'
222 $ hg log -r 'divergent()'
223 $ cd ..
223 $ cd ..
224
224
225 split is not divergences
225 split is not divergences
226 -----------------------------
226 -----------------------------
227
227
228 $ newcase split
228 $ newcase split
229 $ hg debugobsolete `getid A_0` `getid A_1` `getid A_2`
229 $ hg debugobsolete `getid A_0` `getid A_1` `getid A_2`
230 $ hg log -G --hidden
230 $ hg log -G --hidden
231 o 3:392fd25390da A_2
231 o 3:392fd25390da A_2
232 |
232 |
233 | o 2:82623d38b9ba A_1
233 | o 2:82623d38b9ba A_1
234 |/
234 |/
235 | x 1:007dc284c1f8 A_0
235 | x 1:007dc284c1f8 A_0
236 |/
236 |/
237 @ 0:d20a80d4def3 base
237 @ 0:d20a80d4def3 base
238
238
239 $ hg debugsuccessorssets --hidden 'all()'
239 $ hg debugsuccessorssets --hidden 'all()'
240 d20a80d4def3
240 d20a80d4def3
241 d20a80d4def3
241 d20a80d4def3
242 007dc284c1f8
242 007dc284c1f8
243 82623d38b9ba 392fd25390da
243 82623d38b9ba 392fd25390da
244 82623d38b9ba
244 82623d38b9ba
245 82623d38b9ba
245 82623d38b9ba
246 392fd25390da
246 392fd25390da
247 392fd25390da
247 392fd25390da
248 $ hg log -r 'divergent()'
248 $ hg log -r 'divergent()'
249
249
250 Even when subsequente rewriting happen
250 Even when subsequente rewriting happen

  $ mkcommit A_3
  created new head
  $ hg debugobsolete `getid A_1` `getid A_3`
  $ hg up 0
  0 files updated, 0 files merged, 1 files removed, 0 files unresolved
  $ mkcommit A_4
  created new head
  $ hg debugobsolete `getid A_2` `getid A_4`
  $ hg up 0
  0 files updated, 0 files merged, 1 files removed, 0 files unresolved
  $ mkcommit A_5
  created new head
  $ hg debugobsolete `getid A_4` `getid A_5`
  $ hg log -G --hidden
  @  6:e442cfc57690 A_5
  |
  | x  5:6a411f0d7a0a A_4
  |/
  | o  4:01f36c5a8fda A_3
  |/
  | x  3:392fd25390da A_2
  |/
  | x  2:82623d38b9ba A_1
  |/
  | x  1:007dc284c1f8 A_0
  |/
  o  0:d20a80d4def3 base

  $ hg debugsuccessorssets --hidden 'all()'
  d20a80d4def3
      d20a80d4def3
  007dc284c1f8
      01f36c5a8fda e442cfc57690
  82623d38b9ba
      01f36c5a8fda
  392fd25390da
      e442cfc57690
  01f36c5a8fda
      01f36c5a8fda
  6a411f0d7a0a
      e442cfc57690
  e442cfc57690
      e442cfc57690
  $ hg log -r 'divergent()'

Check a more complex obsolescence graph (with divergence)

  $ mkcommit B_0; hg up 0
  0 files updated, 0 files merged, 2 files removed, 0 files unresolved
  $ hg debugobsolete `getid B_0` `getid A_2`
  $ mkcommit A_7; hg up 0
  created new head
  0 files updated, 0 files merged, 1 files removed, 0 files unresolved
  $ mkcommit A_8; hg up 0
  created new head
  0 files updated, 0 files merged, 1 files removed, 0 files unresolved
  $ hg debugobsolete `getid A_5` `getid A_7` `getid A_8`
  $ mkcommit A_9; hg up 0
  created new head
  0 files updated, 0 files merged, 1 files removed, 0 files unresolved
  $ hg debugobsolete `getid A_5` `getid A_9`
  $ hg log -G --hidden
  o  10:bed64f5d2f5a A_9
  |
  | o  9:14608b260df8 A_8
  |/
  | o  8:7ae126973a96 A_7
  |/
  | x  7:3750ebee865d B_0
  | |
  | x  6:e442cfc57690 A_5
  |/
  | x  5:6a411f0d7a0a A_4
  |/
  | o  4:01f36c5a8fda A_3
  |/
  | x  3:392fd25390da A_2
  |/
  | x  2:82623d38b9ba A_1
  |/
  | x  1:007dc284c1f8 A_0
  |/
  @  0:d20a80d4def3 base

  $ hg debugsuccessorssets --hidden 'all()'
  d20a80d4def3
      d20a80d4def3
  007dc284c1f8
      01f36c5a8fda bed64f5d2f5a
      01f36c5a8fda 7ae126973a96 14608b260df8
  82623d38b9ba
      01f36c5a8fda
  392fd25390da
      bed64f5d2f5a
      7ae126973a96 14608b260df8
  01f36c5a8fda
      01f36c5a8fda
  6a411f0d7a0a
      bed64f5d2f5a
      7ae126973a96 14608b260df8
  e442cfc57690
      bed64f5d2f5a
      7ae126973a96 14608b260df8
  3750ebee865d
      bed64f5d2f5a
      7ae126973a96 14608b260df8
  7ae126973a96
      7ae126973a96
  14608b260df8
      14608b260df8
  bed64f5d2f5a
      bed64f5d2f5a
  $ hg log -r 'divergent()'
  4:01f36c5a8fda A_3
  8:7ae126973a96 A_7
  9:14608b260df8 A_8
  10:bed64f5d2f5a A_9

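The `divergent()` listing above can be read straight off the `debugsuccessorssets` output: a changeset is divergent when it appears in the successors-sets of a precursor that has more than one such set (i.e. competing rewrites). A minimal Python sketch of that rule — a hypothetical illustration, not Mercurial's actual code, using the symbolic A_*/B_* names in place of node hashes:

```python
def divergent_nodes(successors_sets):
    """Collect nodes occurring in the successors-sets of any precursor
    that has more than one successors-set (competing rewrites)."""
    divergent = set()
    for precursor, sets in successors_sets.items():
        if len(sets) > 1:  # the precursor was rewritten in >1 way
            for s in sets:
                divergent.update(s)
    return divergent

# Mirrors the debugsuccessorssets output above (symbolic names for hashes):
ssets = {
    "A_0": [["A_3", "A_9"], ["A_3", "A_7", "A_8"]],
    "A_1": [["A_3"]],
    "A_2": [["A_9"], ["A_7", "A_8"]],
    "A_4": [["A_9"], ["A_7", "A_8"]],
    "A_5": [["A_9"], ["A_7", "A_8"]],
    "B_0": [["A_9"], ["A_7", "A_8"]],
}
print(sorted(divergent_nodes(ssets)))  # ['A_3', 'A_7', 'A_8', 'A_9']
```

The result matches the four revisions reported by `hg log -r 'divergent()'` above.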
fix the divergence

  $ mkcommit A_A; hg up 0
  created new head
  0 files updated, 0 files merged, 1 files removed, 0 files unresolved
  $ hg debugobsolete `getid A_9` `getid A_A`
  $ hg debugobsolete `getid A_7` `getid A_A`
  $ hg debugobsolete `getid A_8` `getid A_A`
  $ hg log -G --hidden
  o  11:a139f71be9da A_A
  |
  | x  10:bed64f5d2f5a A_9
  |/
  | x  9:14608b260df8 A_8
  |/
  | x  8:7ae126973a96 A_7
  |/
  | x  7:3750ebee865d B_0
  | |
  | x  6:e442cfc57690 A_5
  |/
  | x  5:6a411f0d7a0a A_4
  |/
  | o  4:01f36c5a8fda A_3
  |/
  | x  3:392fd25390da A_2
  |/
  | x  2:82623d38b9ba A_1
  |/
  | x  1:007dc284c1f8 A_0
  |/
  @  0:d20a80d4def3 base

  $ hg debugsuccessorssets --hidden 'all()'
  d20a80d4def3
      d20a80d4def3
  007dc284c1f8
      01f36c5a8fda a139f71be9da
  82623d38b9ba
      01f36c5a8fda
  392fd25390da
      a139f71be9da
  01f36c5a8fda
      01f36c5a8fda
  6a411f0d7a0a
      a139f71be9da
  e442cfc57690
      a139f71be9da
  3750ebee865d
      a139f71be9da
  7ae126973a96
      a139f71be9da
  14608b260df8
      a139f71be9da
  bed64f5d2f5a
      a139f71be9da
  a139f71be9da
      a139f71be9da
  $ hg log -r 'divergent()'

  $ cd ..


Subset does not diverge
------------------------------

Do not report a divergent successors-set when it is a subset of another
successors-set (report [A,B], not [A] + [A,B]).

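The subset rule above can be sketched as a small filtering step — a hypothetical illustration, not Mercurial's actual implementation: before divergence is reported, any successors-set that is a strict subset of another one is dropped.

```python
def filter_subset_successorssets(sets):
    """Drop any successors-set that is a strict subset of another,
    so [A] + [A, B] is reported as just [A, B]."""
    frozen = [frozenset(s) for s in sets]
    kept, seen = [], set()
    for s in frozen:
        if any(s < other for other in frozen):
            continue  # strict subset of another set: not real divergence
        if s not in seen:  # also deduplicate identical sets
            seen.add(s)
            kept.append(sorted(s))
    return kept

print(filter_subset_successorssets([["A"], ["A", "B"]]))  # [['A', 'B']]
```

With this rule, the test case below reports the single set `82623d38b9ba 392fd25390da` rather than flagging A_0 as divergent.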
  $ newcase subset
  $ hg debugobsolete `getid A_0` `getid A_2`
  $ hg debugobsolete `getid A_0` `getid A_1` `getid A_2`
  invalid branchheads cache (unserved): tip differs
  $ hg debugsuccessorssets --hidden 'desc('A_0')'
  007dc284c1f8
      82623d38b9ba 392fd25390da

  $ cd ..