graphlog: add a way to test the 'groupbranchiter' function...
Pierre-Yves David
r23565:996c01bf default
@@ -0,0 +1,58 b''
This test file aims at testing topological iteration and the various
configurations it can take.

  $ cat >> $HGRCPATH << EOF
  > [ui]
  > logtemplate={rev}\n
  > EOF

On this simple example, all topological branches are displayed in turn until
we can finally display 0. This implies skipping from 8 to 3 and coming back
to 7 later.

  $ hg init test01
  $ cd test01
  $ hg unbundle $TESTDIR/bundles/remote.hg
  adding changesets
  adding manifests
  adding file changes
  added 9 changesets with 7 changes to 4 files (+1 heads)
  (run 'hg heads' to see heads, 'hg merge' to merge)

  $ hg log -G
  o  8
  |
  | o  7
  | |
  | o  6
  | |
  | o  5
  | |
  | o  4
  | |
  o |  3
  | |
  o |  2
  | |
  o |  1
  |/
  o  0

  $ hg --config experimental.graph-topological=1 log -G
  o  8
  |
  o  3
  |
  o  2
  |
  o  1
  |
  | o  7
  | |
  | o  6
  | |
  | o  5
  | |
  | o  4
  |/
  o  0

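The `--config` flag used above enables the knob for a single invocation. It
could also be enabled persistently; a minimal hgrc fragment using the same
experimental option exercised by this test:

```
[experimental]
graph-topological = 1
```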
@@ -1,529 +1,532 b''
# Revision graph generator for Mercurial
#
# Copyright 2008 Dirkjan Ochtman <dirkjan@ochtman.nl>
# Copyright 2007 Joel Rosdahl <joel@rosdahl.net>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""supports walking the history as DAGs suitable for graphical output

The most basic format we use is that of::

  (id, type, data, [parentids])

The node and parent ids are arbitrary integers which identify a node in the
context of the graph returned. Type is a constant specifying the node type.
Data depends on type.
"""

from mercurial.node import nullrev
import util

CHANGESET = 'C'

def groupbranchiter(revs, parentsfunc):
    """yield revisions from heads to roots, one (topological) branch after
    the other.

    This function aims to be used by a graph generator that wishes to
    minimize the amount of parallel branches and their interleaving.

    Example iteration order:

      o  4
      |
      o  1
      |
      | o  3
      | |
      | o  2
      |/
      o  0

    Currently does not handle non-contiguous <revs> input.

    Currently considers every changeset under a merge to be on the same
    branch, using revision numbers to sort them.

    Could easily be extended to give priority to an initial branch."""
    ### Quick summary of the algorithm
    #
    # This function is based around a "retention" principle. We keep
    # revisions in memory until we are ready to emit a whole branch that
    # immediately "merges" into an existing one. This reduces the number of
    # branches "ongoing" at the same time.
    #
    # During iteration revs are split into two groups:
    # A) revisions already emitted
    # B) revisions in "retention". They are stored as different subgroups.
    #
    # For each REV, we apply the following logic:
    #
    #   a) if REV is a parent of (A), we will emit it. But before emitting
    #      it, we'll "free" all the revs from subgroups in (B) that were
    #      waiting for REV to be available, so we emit all revisions of
    #      such a subgroup before emitting REV.
    #
    #   b) else, we'll search for a subgroup in (B) awaiting REV to be
    #      available. If such a subgroup exists, we add REV to it and the
    #      subgroup is now awaiting REV.parents() to be available.
    #
    #   c) finally, if no such group existed in (B), we create a new
    #      subgroup.
    #
    # To bootstrap the algorithm, we emit the tipmost revision.

    revs.sort(reverse=True)

    # Set of parents of revisions that have been yielded. They can be
    # considered unblocked as the graph generator is already aware of them,
    # so there is no need to delay the ones that reference them.
    unblocked = set()

    # List of groups waiting to be displayed. Each group is defined by:
    #
    #   (revs:    list of revs waiting to be displayed,
    #    blocked: set of revs that cannot be displayed before those in
    #             'revs')
    #
    # The second value ('blocked') corresponds to the parents of any
    # revision in the group ('revs') that are not themselves contained in
    # the group. The main idea of this algorithm is to delay as much as
    # possible the emission of any revision. This means waiting for the
    # moment we are about to display these parents to display the revs in
    # a group.
    #
    # This first implementation is smart until it meets a merge: it will
    # emit revs as soon as any parent is about to be emitted and can grow
    # an arbitrary number of revs in 'blocked'. In practice this means we
    # properly retain new branches but give up on any special ordering for
    # ancestors of merges. The implementation can be improved to handle
    # this better.
    #
    # The first subgroup is special. It corresponds to all the revisions
    # that were already emitted. Its 'revs' list is expected to be empty
    # and its 'blocked' set contains the parent revisions of already
    # emitted revisions.
    #
    # You could pre-seed the <parents> set of groups[0] with specific
    # changesets to select which branch is emitted first.
    #
    # We do not support revisions with holes yet, but adding such support
    # would be easy. The iteration would have to be done using both input
    # revisions and parents (see the cl.ancestors function + a few tweaks),
    # but only revisions that are part of the initial set should be
    # emitted.
    groups = [([], unblocked)]
    for current in revs:
        # Look for a subgroup blocked and waiting for the current revision.
        matching = [i for i, g in enumerate(groups) if current in g[1]]

        if matching:
            # The main idea is to gather together all sets that await the
            # same revision.
            #
            # This merging is done at the time we are about to add this
            # commonly awaited revision to the subgroup, for simplicity's
            # sake. Such a merge could happen sooner when we update the
            # "blocked" set of revisions.
            #
            # We also always keep the oldest subgroup first. We could
            # probably improve the behavior by putting the longest set
            # first. That way, graph algorithms could minimise the length
            # of the parallel lines they draw. This is currently not done.
            targetidx = matching.pop(0)
            trevs, tparents = groups[targetidx]
            for i in matching:
                gr = groups[i]
                trevs.extend(gr[0])
                tparents |= gr[1]
            # delete all merged subgroups (but the one we keep)
            # (starting from the last subgroup for performance and sanity
            # reasons)
            for i in reversed(matching):
                del groups[i]
        else:
            # This is a new head. We create a new subgroup for it.
            targetidx = len(groups)
            groups.append(([], set([current])))

        gr = groups[targetidx]

        # We now add the current node to this subgroup. This is done after
        # the subgroup merging because all elements from a subgroup that
        # relied on this rev must precede it.
        #
        # We also update the <parents> set to include the parents of the
        # new node.
        gr[0].append(current)
        gr[1].remove(current)
        gr[1].update([p for p in parentsfunc(current) if p > nullrev])

        # Look for a subgroup to display
        #
        # When unblocked is empty (if clause), we are not waiting on any
        # revision: either this is the first iteration (and no priority was
        # given) or we have output a whole disconnected set of the graph
        # (reached a root). In that case we arbitrarily take the oldest
        # known subgroup. The heuristic could probably be better.
        #
        # Otherwise (elif clause) we have already emitted some revisions.
        # If the subgroup awaits the same revisions as the emitted ones, we
        # can safely output it.
        if not unblocked:
            if len(groups) > 1:  # display other subset
                targetidx = 1
                gr = groups[1]
        elif not gr[1] & unblocked:
            gr = None

        if gr is not None:
            # update the set of awaited revisions with the ones from the
            # subgroup
            unblocked |= gr[1]
            # output all revisions in the subgroup
            for r in gr[0]:
                yield r
            # delete the subgroup that you just output,
            # unless it is groups[0], in which case you just empty it.
            if targetidx:
                del groups[targetidx]
            else:
                gr[0][:] = []

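For a quick, standalone check of the retention algorithm, here is a condensed transcription of the function above (comments trimmed, `nullrev` inlined as -1) run against the DAG from its docstring; the toy `parents` table is purely illustrative:

```python
def groupbranchiter(revs, parentsfunc):
    """Yield revisions from heads to roots, one topological branch at a time."""
    revs = sorted(revs, reverse=True)
    unblocked = set()           # parents of already-emitted revisions
    groups = [([], unblocked)]  # (revs to emit, parents they are waiting on)
    for current in revs:
        # find every subgroup waiting on the current revision
        matching = [i for i, g in enumerate(groups) if current in g[1]]
        if matching:
            # merge all subgroups awaiting the same revision into the oldest
            targetidx = matching.pop(0)
            trevs, tparents = groups[targetidx]
            for i in matching:
                trevs.extend(groups[i][0])
                tparents |= groups[i][1]
            for i in reversed(matching):
                del groups[i]
        else:
            # a new head: open a fresh subgroup for it
            targetidx = len(groups)
            groups.append(([], set([current])))
        gr = groups[targetidx]
        gr[0].append(current)
        gr[1].remove(current)
        gr[1].update(p for p in parentsfunc(current) if p > -1)
        # emit a subgroup when it waits on the same revisions as the
        # already-displayed ones (or when nothing was displayed yet)
        if not unblocked:
            if len(groups) > 1:
                targetidx = 1
                gr = groups[1]
        elif not gr[1] & unblocked:
            gr = None
        if gr is not None:
            unblocked |= gr[1]
            for r in gr[0]:
                yield r
            if targetidx:
                del groups[targetidx]
            else:
                gr[0][:] = []

# the DAG from the docstring example: 0 is the root, 3<-2<-0 and 4<-1<-0
parents = {0: (-1, -1), 1: (0, -1), 2: (0, -1), 3: (2, -1), 4: (1, -1)}
order = list(groupbranchiter([0, 1, 2, 3, 4], parents.__getitem__))
# branches come out whole: 4 with its ancestor 1, then 3-2, then the root 0
```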
def dagwalker(repo, revs):
    """cset DAG generator yielding (id, CHANGESET, ctx, [parentids]) tuples

    This generator function walks through revisions (which should be
    ordered from higher to lower revision numbers). It returns a tuple for
    each node. The node and parent ids are arbitrary integers which
    identify a node in the context of the graph returned.
    """
    if not revs:
        return

    cl = repo.changelog
    lowestrev = revs.min()
    gpcache = {}

    if repo.ui.configbool('experimental', 'graph-topological', False):
        revs = list(groupbranchiter(revs, repo.changelog.parentrevs))

    for rev in revs:
        ctx = repo[rev]
        parents = sorted(set([p.rev() for p in ctx.parents()
                              if p.rev() in revs]))
        mpars = [p.rev() for p in ctx.parents() if
                 p.rev() != nullrev and p.rev() not in parents]

        for mpar in mpars:
            gp = gpcache.get(mpar)
            if gp is None:
                gp = gpcache[mpar] = grandparent(cl, lowestrev, revs, mpar)
            if not gp:
                parents.append(mpar)
            else:
                parents.extend(g for g in gp if g not in parents)

        yield (ctx.rev(), CHANGESET, ctx, parents)

def nodes(repo, nodes):
    """cset DAG generator yielding (id, CHANGESET, ctx, [parentids]) tuples

    This generator function walks the given nodes. It only returns parents
    that are in nodes, too.
    """
    include = set(nodes)
    for node in nodes:
        ctx = repo[node]
        parents = set([p.rev() for p in ctx.parents() if p.node() in include])
        yield (ctx.rev(), CHANGESET, ctx, sorted(parents))

def colored(dag, repo):
    """annotates a DAG with colored edge information

    For each DAG node this function emits tuples::

      (id, type, data, (col, color), [(col, nextcol, color)])

    with the following new elements:

      - Tuple (col, color) with column and color index for the current node
      - A list of tuples indicating the edges between the current node and
        its parents.
    """
    seen = []
    colors = {}
    newcolor = 1
    config = {}

    for key, val in repo.ui.configitems('graph'):
        if '.' in key:
            branch, setting = key.rsplit('.', 1)
            # Validation
            if setting == "width" and val.isdigit():
                config.setdefault(branch, {})[setting] = int(val)
            elif setting == "color" and val.isalnum():
                config.setdefault(branch, {})[setting] = val

    if config:
        getconf = util.lrucachefunc(
            lambda rev: config.get(repo[rev].branch(), {}))
    else:
        getconf = lambda rev: {}

    for (cur, type, data, parents) in dag:

        # Compute seen and next
        if cur not in seen:
            seen.append(cur)  # new head
            colors[cur] = newcolor
            newcolor += 1

        col = seen.index(cur)
        color = colors.pop(cur)
        next = seen[:]

        # Add parents to next
        addparents = [p for p in parents if p not in next]
        next[col:col + 1] = addparents

        # Set colors for the parents
        for i, p in enumerate(addparents):
            if not i:
                colors[p] = color
            else:
                colors[p] = newcolor
                newcolor += 1

        # Add edges to the graph
        edges = []
        for ecol, eid in enumerate(seen):
            if eid in next:
                bconf = getconf(eid)
                edges.append((
                    ecol, next.index(eid), colors[eid],
                    bconf.get('width', -1),
                    bconf.get('color', '')))
            elif eid == cur:
                for p in parents:
                    bconf = getconf(p)
                    edges.append((
                        ecol, next.index(p), color,
                        bconf.get('width', -1),
                        bconf.get('color', '')))

        # Yield and move on
        yield (cur, type, data, (col, color), edges)
        seen = next

def grandparent(cl, lowestrev, roots, head):
    """Return all ancestors of head in roots whose revision is
    greater than or equal to lowestrev.
    """
    pending = set([head])
    seen = set()
    kept = set()
    llowestrev = max(nullrev, lowestrev)
    while pending:
        r = pending.pop()
        if r >= llowestrev and r not in seen:
            if r in roots:
                kept.add(r)
            else:
                pending.update([p for p in cl.parentrevs(r)])
            seen.add(r)
    return sorted(kept)

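grandparent's walk can be exercised in isolation; in this sketch a plain dict stands in for `cl.parentrevs` and `nullrev` is inlined as -1 (the toy history is illustrative only):

```python
def grandparent(parentrevs, lowestrev, roots, head):
    """Return all ancestors of head in roots whose revision is >= lowestrev."""
    pending = set([head])
    seen = set()
    kept = set()
    llowestrev = max(-1, lowestrev)  # -1 is nullrev
    while pending:
        r = pending.pop()
        if r >= llowestrev and r not in seen:
            if r in roots:
                kept.add(r)                    # stop: r belongs to the set
            else:
                pending.update(parentrevs(r))  # keep walking ancestors
            seen.add(r)
    return sorted(kept)

# toy linear history 0 <- 1 <- 2 <- 3, where only 0 and 1 are displayed;
# head 3 resolves to its nearest displayed ancestor, 1
table = {0: (-1, -1), 1: (0, -1), 2: (1, -1), 3: (2, -1)}
nearest = grandparent(table.__getitem__, 0, set([0, 1]), 3)
```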
def asciiedges(type, char, lines, seen, rev, parents):
    """adds edge info to changelog DAG walk suitable for ascii()"""
    if rev not in seen:
        seen.append(rev)
    nodeidx = seen.index(rev)

    knownparents = []
    newparents = []
    for parent in parents:
        if parent in seen:
            knownparents.append(parent)
        else:
            newparents.append(parent)

    ncols = len(seen)
    nextseen = seen[:]
    nextseen[nodeidx:nodeidx + 1] = newparents
    edges = [(nodeidx, nextseen.index(p)) for p in knownparents if p != nullrev]

    while len(newparents) > 2:
        # ascii() only knows how to add or remove a single column between
        # two calls. Nodes with more than two parents break this constraint
        # so we introduce intermediate expansion lines to grow the active
        # node list slowly.
        edges.append((nodeidx, nodeidx))
        edges.append((nodeidx, nodeidx + 1))
        nmorecols = 1
        yield (type, char, lines, (nodeidx, edges, ncols, nmorecols))
        char = '\\'
        lines = []
        nodeidx += 1
        ncols += 1
        edges = []
        del newparents[0]

    if len(newparents) > 0:
        edges.append((nodeidx, nodeidx))
    if len(newparents) > 1:
        edges.append((nodeidx, nodeidx + 1))
    nmorecols = len(nextseen) - ncols
    seen[:] = nextseen
    yield (type, char, lines, (nodeidx, edges, ncols, nmorecols))

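The expansion trick described in the comment above can be observed by feeding a node with three new parents to a standalone transcription of asciiedges (`nullrev` inlined as -1; the inputs are toy values):

```python
def asciiedges(type, char, lines, seen, rev, parents):
    """Add edge info to a changelog DAG walk, one column change per yield."""
    if rev not in seen:
        seen.append(rev)
    nodeidx = seen.index(rev)

    knownparents = [p for p in parents if p in seen]
    newparents = [p for p in parents if p not in seen]

    ncols = len(seen)
    nextseen = seen[:]
    nextseen[nodeidx:nodeidx + 1] = newparents
    edges = [(nodeidx, nextseen.index(p)) for p in knownparents if p != -1]

    while len(newparents) > 2:
        # emit an intermediate expansion row: the consumer can only grow
        # the column list by one between two calls
        edges.append((nodeidx, nodeidx))
        edges.append((nodeidx, nodeidx + 1))
        yield (type, char, lines, (nodeidx, edges, ncols, 1))
        char = '\\'
        lines = []
        nodeidx += 1
        ncols += 1
        edges = []
        del newparents[0]

    if len(newparents) > 0:
        edges.append((nodeidx, nodeidx))
    if len(newparents) > 1:
        edges.append((nodeidx, nodeidx + 1))
    nmorecols = len(nextseen) - ncols
    seen[:] = nextseen
    yield (type, char, lines, (nodeidx, edges, ncols, nmorecols))

# a revision with three new parents yields two rows:
# an expansion row first, then the node row itself
rows = list(asciiedges('C', 'o', [], [], 1, [2, 3, 4]))
```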
def _fixlongrightedges(edges):
    for (i, (start, end)) in enumerate(edges):
        if end > start:
            edges[i] = (start, end + 1)

def _getnodelineedgestail(
        node_index, p_node_index, n_columns, n_columns_diff, p_diff, fix_tail):
    if fix_tail and n_columns_diff == p_diff and n_columns_diff != 0:
        # Still going in the same non-vertical direction.
        if n_columns_diff == -1:
            start = max(node_index + 1, p_node_index)
            tail = ["|", " "] * (start - node_index - 1)
            tail.extend(["/", " "] * (n_columns - start))
            return tail
        else:
            return ["\\", " "] * (n_columns - node_index - 1)
    else:
        return ["|", " "] * (n_columns - node_index - 1)

def _drawedges(edges, nodeline, interline):
    for (start, end) in edges:
        if start == end + 1:
            interline[2 * end + 1] = "/"
        elif start == end - 1:
            interline[2 * start + 1] = "\\"
        elif start == end:
            interline[2 * start] = "|"
        else:
            if 2 * end >= len(nodeline):
                continue
            nodeline[2 * end] = "+"
            if start > end:
                (start, end) = (end, start)
            for i in range(2 * start + 1, 2 * end):
                if nodeline[i] != "+":
                    nodeline[i] = "-"

def _getpaddingline(ni, n_columns, edges):
    line = []
    line.extend(["|", " "] * ni)
    if (ni, ni - 1) in edges or (ni, ni) in edges:
        # (ni, ni - 1)      (ni, ni)
        # | | | |           | | | |
        # +---o |           | o---+
        # | | c |           | c | |
        # | |/ /            | |/ /
        # | | |             | | |
        c = "|"
    else:
        c = " "
    line.extend([c, " "])
    line.extend(["|", " "] * (n_columns - ni - 1))
    return line

def asciistate():
    """returns the initial value for the "state" argument to ascii()"""
    return [0, 0]

def ascii(ui, state, type, char, text, coldata):
    """prints an ASCII graph of the DAG

    takes the following arguments (one call per node in the graph):

      - ui to write to
      - Somewhere to keep the needed state in (init to asciistate())
      - Column of the current node in the set of ongoing edges.
      - Type indicator of node data, usually 'C' for changesets.
      - Payload: (char, lines):
        - Character to use as node's symbol.
        - List of lines to display as the node's text.
      - Edges; a list of (col, next_col) indicating the edges between
        the current node and its parents.
      - Number of columns (ongoing edges) in the current revision.
      - The difference between the number of columns (ongoing edges)
        in the next revision and the number of columns (ongoing edges)
        in the current revision. That is: -1 means one column removed;
        0 means no columns added or removed; 1 means one column added.
    """

    idx, edges, ncols, coldiff = coldata
    assert -2 < coldiff < 2
    if coldiff == -1:
        # Transform
        #
        #     | | |        | | |
        #     o | |  into  o---+
        #     |X /         |/ /
        #     | |          | |
        _fixlongrightedges(edges)

    # add_padding_line says whether to rewrite
    #
    #     | | | |        | | | |
    #     | o---+  into  | o---+
    #     | / /          | | |    # <--- padding line
    #     o | |          | / /
    #                    o | |
    add_padding_line = (len(text) > 2 and coldiff == -1 and
                        [x for (x, y) in edges if x + 1 < y])

    # fix_nodeline_tail says whether to rewrite
    #
    #     | | o | |        | | o | |
    #     | | |/ /         | | |/ /
    #     | o | |    into  | o / /    # <--- fixed nodeline tail
    #     | |/ /           | |/ /
    #     o | |            o | |
    fix_nodeline_tail = len(text) <= 2 and not add_padding_line

    # nodeline is the line containing the node character (typically o)
    nodeline = ["|", " "] * idx
    nodeline.extend([char, " "])

    nodeline.extend(
        _getnodelineedgestail(idx, state[1], ncols, coldiff,
                              state[0], fix_nodeline_tail))

    # shift_interline is the line containing the non-vertical
    # edges between this entry and the next
    shift_interline = ["|", " "] * idx
    if coldiff == -1:
        n_spaces = 1
        edge_ch = "/"
    elif coldiff == 0:
        n_spaces = 2
        edge_ch = "|"
    else:
        n_spaces = 3
        edge_ch = "\\"
    shift_interline.extend(n_spaces * [" "])
501 shift_interline.extend([edge_ch, " "] * (ncols - idx - 1))
504 shift_interline.extend([edge_ch, " "] * (ncols - idx - 1))
502
505
503 # draw edges from the current node to its parents
506 # draw edges from the current node to its parents
504 _drawedges(edges, nodeline, shift_interline)
507 _drawedges(edges, nodeline, shift_interline)
505
508
506 # lines is the list of all graph lines to print
509 # lines is the list of all graph lines to print
507 lines = [nodeline]
510 lines = [nodeline]
508 if add_padding_line:
511 if add_padding_line:
509 lines.append(_getpaddingline(idx, ncols, edges))
512 lines.append(_getpaddingline(idx, ncols, edges))
510 lines.append(shift_interline)
513 lines.append(shift_interline)
511
514
512 # make sure that there are as many graph lines as there are
515 # make sure that there are as many graph lines as there are
513 # log strings
516 # log strings
514 while len(text) < len(lines):
517 while len(text) < len(lines):
515 text.append("")
518 text.append("")
516 if len(lines) < len(text):
519 if len(lines) < len(text):
517 extra_interline = ["|", " "] * (ncols + coldiff)
520 extra_interline = ["|", " "] * (ncols + coldiff)
518 while len(lines) < len(text):
521 while len(lines) < len(text):
519 lines.append(extra_interline)
522 lines.append(extra_interline)
520
523
521 # print lines
524 # print lines
522 indentation_level = max(ncols, ncols + coldiff)
525 indentation_level = max(ncols, ncols + coldiff)
523 for (line, logstr) in zip(lines, text):
526 for (line, logstr) in zip(lines, text):
524 ln = "%-*s %s" % (2 * indentation_level, "".join(line), logstr)
527 ln = "%-*s %s" % (2 * indentation_level, "".join(line), logstr)
525 ui.write(ln.rstrip() + '\n')
528 ui.write(ln.rstrip() + '\n')
526
529
527 # ... and start over
530 # ... and start over
528 state[0] = coldiff
531 state[0] = coldiff
529 state[1] = idx
532 state[1] = idx
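
The shift_interline construction above is compact, so here is a hedged, self-contained sketch of just that step. `demo_shift_interline` is an illustrative name, not part of graphmod.py, and the helper only reproduces the interline logic (it omits `_drawedges`, which later overlays the parent edges):

```python
# Hypothetical, self-contained sketch of the shift_interline construction
# in ascii() above; demo_shift_interline is not a graphmod API.
def demo_shift_interline(idx, ncols, coldiff):
    """Build the row drawn between this node's line and the next one.

    idx     -- column of the current node
    ncols   -- number of ongoing edge columns in this revision
    coldiff -- change in column count for the next revision (-1, 0, or 1)
    """
    shift_interline = ["|", " "] * idx      # vertical edges left of the node
    if coldiff == -1:
        n_spaces, edge_ch = 1, "/"          # a column closes: edges lean left
    elif coldiff == 0:
        n_spaces, edge_ch = 2, "|"          # column count unchanged
    else:
        n_spaces, edge_ch = 3, "\\"         # a column opens: edges lean right
    shift_interline.extend(n_spaces * [" "])
    shift_interline.extend([edge_ch, " "] * (ncols - idx - 1))
    return "".join(shift_interline).rstrip()

# For a node in column 1 of 3 columns, with one column closing:
# demo_shift_interline(1, 3, -1) returns "|  /"
```

This mirrors why the three `coldiff` branches pick different spacing: the characters to the right of the node must slant toward (or away from) their new columns on the next row.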