graphmod: rename graph-topological config to graph-group-branches...
Augie Fackler -
r23569:3ecbcffd default
@@ -1,566 +1,566
1 1 # Revision graph generator for Mercurial
2 2 #
3 3 # Copyright 2008 Dirkjan Ochtman <dirkjan@ochtman.nl>
4 4 # Copyright 2007 Joel Rosdahl <joel@rosdahl.net>
5 5 #
6 6 # This software may be used and distributed according to the terms of the
7 7 # GNU General Public License version 2 or any later version.
8 8
9 9 """supports walking the history as DAGs suitable for graphical output
10 10
11 11 The most basic format we use is that of::
12 12
13 13 (id, type, data, [parentids])
14 14
15 15 The node and parent ids are arbitrary integers which identify a node in the
16 16 context of the graph returned. Type is a constant specifying the node type.
17 17 Data depends on type.
18 18 """
19 19
20 20 from mercurial.node import nullrev
21 21 import util
22 22
23 23 import heapq
24 24
25 25 CHANGESET = 'C'
26 26
27 27 def groupbranchiter(revs, parentsfunc, firstbranch=()):
28 28 """yield revisions from heads to roots, one (topological) branch after the other.
29 29
30 30 This function aims to be used by a graph generator that wishes to minimize
31 31 the amount of parallel branches and their interleaving.
32 32
33 33 Example iteration order:
34 34
35 35 o 4
36 36 |
37 37 o 1
38 38 |
39 39 | o 3
40 40 | |
41 41 | o 2
42 42 |/
43 43 o 0
44 44
45 45 Currently, every changeset under a merge is considered to be on the same
46 46 branch, using revision numbers to sort them.
47 47 """
48 48
49 49 ### Quick summary of the algorithm
50 50 #
51 51 # This function is based on a "retention" principle. We keep revisions
52 52 # in memory until we are ready to emit a whole branch that immediately
53 53 # "merges" into an existing one. This reduces the number of branches
54 54 # "ongoing" at the same time.
55 55 #
56 56 # During iteration revs are split into two groups:
57 57 # A) revisions already emitted
58 58 # B) revisions in "retention". They are stored as different subgroups.
59 59 #
60 60 # For each REV, we apply the following logic:
61 61 #
62 62 # a) if REV is a parent of (A), we will emit it. But before emitting it,
63 63 # we "free" all the revs from subgroups in (B) that were waiting for
64 64 # REV to be available. So we emit all revisions of such a subgroup
65 65 # before emitting REV.
66 66 #
67 67 # b) else, we search for a subgroup in (B) awaiting REV's availability.
68 68 # If such a subgroup exists, we add REV to it and the subgroup is
69 69 # now awaiting REV.parents() to be available.
70 70 #
71 71 # c) finally, if no such group exists in (B), we create a new subgroup.
72 72 #
73 73 #
74 74 # To bootstrap the algorithm, we emit the tipmost revision.
75 75
76 76 revs.sort(reverse=True)
77 77
78 78 # Set of parents of revisions that have been yielded. They can be
79 79 # considered unblocked, as the graph generator is already aware of them,
80 80 # so there is no need to delay the ones that reference them.
81 81 #
82 82 # If someone wants to prioritize a branch over the others, pre-filling
83 83 # this set will force all other branches to wait until that branch is
84 84 # ready to be output.
85 85 unblocked = set(firstbranch)
86 86
87 87 # list of groups waiting to be displayed; each group is defined by:
88 88 #
89 89 # (revs: list of revs waiting to be displayed,
90 90 # blocked: set of revs that cannot be displayed before those in 'revs')
91 91 #
92 92 # The second value ('blocked') corresponds to the parents of any
93 93 # revision in the group ('revs') that is not itself contained in the
94 94 # group. The main idea of this algorithm is to delay as much as possible
95 95 # the emission of any revision. This means waiting for the moment we are
96 96 # about to display these parents to display the revs in a group.
97 97 #
98 98 # This first implementation is smart until it meets a merge: it will
99 99 # emit revs as soon as any parent is about to be emitted and can grow an
100 100 # arbitrary number of revs in 'blocked'. In practice this means we
101 101 # properly retain new branches but give up on any special ordering for
102 102 # ancestors of merges. The implementation can be improved to handle
103 103 # this better.
104 104 #
105 105 # The first subgroup is special. It corresponds to all the revisions
106 106 # that were already emitted. Its 'revs' list is expected to be empty,
107 107 # and its 'blocked' set contains the parent revisions of already
108 108 # emitted revisions. You could pre-seed the <parents> set of groups[0]
109 109 # with specific changesets to select which branch is emitted first.
110 110 groups = [([], unblocked)]
111 111 pendingheap = []
112 112 pendingset = set()
113 113
114 114 heapq.heapify(pendingheap)
115 115 heappop = heapq.heappop
116 116 heappush = heapq.heappush
117 117 for currentrev in revs:
118 118 # The heap pops the smallest element; we want the highest, so we negate
119 119 if currentrev not in pendingset:
120 120 heappush(pendingheap, -currentrev)
121 121 pendingset.add(currentrev)
122 122 # iterate on pending revs until the current rev has been
123 123 # processed
124 124 rev = None
125 125 while rev != currentrev:
126 126 rev = -heappop(pendingheap)
127 127 pendingset.remove(rev)
128 128
129 129 # Look for a blocked subgroup waiting for the current revision.
130 130 matching = [i for i, g in enumerate(groups) if rev in g[1]]
131 131
132 132 if matching:
133 133 # The main idea is to gather together all sets that are waiting
134 134 # on the same revision.
135 135 #
136 136 # This merging is done at the time we are about to add this
137 137 # commonly awaited revision to the subgroup, for simplicity's
138 138 # sake. Such a merge could happen sooner when we update the
139 139 # "blocked" set of a revision.
140 140 #
141 141 # We also always keep the oldest subgroup first. We could
142 142 # probably improve the behavior by putting the longest set
143 143 # first. That way, graph algorithms could minimize the length
144 144 # of the parallel lines they draw. This is currently not done.
145 145 targetidx = matching.pop(0)
146 146 trevs, tparents = groups[targetidx]
147 147 for i in matching:
148 148 gr = groups[i]
149 149 trevs.extend(gr[0])
150 150 tparents |= gr[1]
151 151 # delete all merged subgroups (but the one we keep), starting
152 152 # from the last subgroup, for performance and sanity reasons
153 153 for i in reversed(matching):
154 154 del groups[i]
155 155 else:
156 156 # This is a new head. We create a new subgroup for it.
157 157 targetidx = len(groups)
158 158 groups.append(([], set([rev])))
159 159
160 160 gr = groups[targetidx]
161 161
162 162 # We now add the current node to this subgroup. This is done
163 163 # after the subgroup merging because all elements from a subgroup
164 164 # that relied on this rev must precede it.
165 165 #
166 166 # We also update the <parents> set to include the parents of the
167 167 # new node.
168 168 if rev == currentrev: # only display stuff in rev
169 169 gr[0].append(rev)
170 170 gr[1].remove(rev)
171 171 parents = [p for p in parentsfunc(rev) if p > nullrev]
172 172 gr[1].update(parents)
173 173 for p in parents:
174 174 if p not in pendingset:
175 175 pendingset.add(p)
176 176 heappush(pendingheap, -p)
177 177
178 178 # Look for a subgroup to display
179 179 #
180 180 # When unblocked is empty (if clause), we are not waiting on any
181 181 # revision: either this is the first iteration (and no priority
182 182 # was given), or we have output a whole disconnected set of the
183 183 # graph (reached a root). In that case we arbitrarily take the
184 184 # oldest known subgroup. The heuristic could probably be better.
185 185 #
186 186 # Otherwise (elif clause), we have some emitted revisions. If the
187 187 # subgroup waits on the same revisions as the emitted ones, we
188 188 # can safely output it.
189 189 if not unblocked:
190 190 if len(groups) > 1: # display other subset
191 191 targetidx = 1
192 192 gr = groups[1]
193 193 elif not gr[1] & unblocked:
194 194 gr = None
195 195
196 196 if gr is not None:
197 197 # update the set of awaited revisions with the one from the
198 198 # subgroup
199 199 unblocked |= gr[1]
200 200 # output all revisions in the subgroup
201 201 for r in gr[0]:
202 202 yield r
203 203 # delete the subgroup that you just output
204 204 # unless it is groups[0] in which case you just empty it.
205 205 if targetidx:
206 206 del groups[targetidx]
207 207 else:
208 208 gr[0][:] = []
209 209 # Check if we have some subgroups waiting for revisions we are not
210 210 # going to iterate over
211 211 for g in groups:
212 212 for r in g[0]:
213 213 yield r
214 214
215 215 def dagwalker(repo, revs):
216 216 """cset DAG generator yielding (id, CHANGESET, ctx, [parentids]) tuples
217 217
218 218 This generator function walks through revisions (which should be ordered
219 219 from highest to lowest). It returns a tuple for each node. The node and parent
220 220 ids are arbitrary integers which identify a node in the context of the graph
221 221 returned.
222 222 """
223 223 if not revs:
224 224 return
225 225
226 226 cl = repo.changelog
227 227 lowestrev = revs.min()
228 228 gpcache = {}
229 229
230 if repo.ui.configbool('experimental', 'graph-topological', False):
230 if repo.ui.configbool('experimental', 'graph-group-branches', False):
231 231 firstbranch = ()
232 firstbranchrevset = repo.ui.config('experimental',
233 'graph-topological.firstbranch', '')
232 firstbranchrevset = repo.ui.config(
233 'experimental', 'graph-group-branches.firstbranch', '')
234 234 if firstbranchrevset:
235 235 firstbranch = repo.revs(firstbranchrevset)
236 236 parentrevs = repo.changelog.parentrevs
237 237 revs = list(groupbranchiter(revs, parentrevs, firstbranch))
238 238
239 239 for rev in revs:
240 240 ctx = repo[rev]
241 241 parents = sorted(set([p.rev() for p in ctx.parents()
242 242 if p.rev() in revs]))
243 243 mpars = [p.rev() for p in ctx.parents() if
244 244 p.rev() != nullrev and p.rev() not in parents]
245 245
246 246 for mpar in mpars:
247 247 gp = gpcache.get(mpar)
248 248 if gp is None:
249 249 gp = gpcache[mpar] = grandparent(cl, lowestrev, revs, mpar)
250 250 if not gp:
251 251 parents.append(mpar)
252 252 else:
253 253 parents.extend(g for g in gp if g not in parents)
254 254
255 255 yield (ctx.rev(), CHANGESET, ctx, parents)
256 256
257 257 def nodes(repo, nodes):
258 258 """cset DAG generator yielding (id, CHANGESET, ctx, [parentids]) tuples
259 259
260 260 This generator function walks the given nodes. It only returns parents
261 261 that are in nodes, too.
262 262 """
263 263 include = set(nodes)
264 264 for node in nodes:
265 265 ctx = repo[node]
266 266 parents = set([p.rev() for p in ctx.parents() if p.node() in include])
267 267 yield (ctx.rev(), CHANGESET, ctx, sorted(parents))
268 268
269 269 def colored(dag, repo):
270 270 """annotates a DAG with colored edge information
271 271
272 272 For each DAG node this function emits tuples::
273 273
274 274 (id, type, data, (col, color), [(col, nextcol, color)])
275 275
276 276 with the following new elements:
277 277
278 278 - Tuple (col, color) with column and color index for the current node
279 279 - A list of tuples indicating the edges between the current node and its
280 280 parents.
281 281 """
282 282 seen = []
283 283 colors = {}
284 284 newcolor = 1
285 285 config = {}
286 286
287 287 for key, val in repo.ui.configitems('graph'):
288 288 if '.' in key:
289 289 branch, setting = key.rsplit('.', 1)
290 290 # Validation
291 291 if setting == "width" and val.isdigit():
292 292 config.setdefault(branch, {})[setting] = int(val)
293 293 elif setting == "color" and val.isalnum():
294 294 config.setdefault(branch, {})[setting] = val
295 295
296 296 if config:
297 297 getconf = util.lrucachefunc(
298 298 lambda rev: config.get(repo[rev].branch(), {}))
299 299 else:
300 300 getconf = lambda rev: {}
301 301
302 302 for (cur, type, data, parents) in dag:
303 303
304 304 # Compute seen and next
305 305 if cur not in seen:
306 306 seen.append(cur) # new head
307 307 colors[cur] = newcolor
308 308 newcolor += 1
309 309
310 310 col = seen.index(cur)
311 311 color = colors.pop(cur)
312 312 next = seen[:]
313 313
314 314 # Add parents to next
315 315 addparents = [p for p in parents if p not in next]
316 316 next[col:col + 1] = addparents
317 317
318 318 # Set colors for the parents
319 319 for i, p in enumerate(addparents):
320 320 if not i:
321 321 colors[p] = color
322 322 else:
323 323 colors[p] = newcolor
324 324 newcolor += 1
325 325
326 326 # Add edges to the graph
327 327 edges = []
328 328 for ecol, eid in enumerate(seen):
329 329 if eid in next:
330 330 bconf = getconf(eid)
331 331 edges.append((
332 332 ecol, next.index(eid), colors[eid],
333 333 bconf.get('width', -1),
334 334 bconf.get('color', '')))
335 335 elif eid == cur:
336 336 for p in parents:
337 337 bconf = getconf(p)
338 338 edges.append((
339 339 ecol, next.index(p), color,
340 340 bconf.get('width', -1),
341 341 bconf.get('color', '')))
342 342
343 343 # Yield and move on
344 344 yield (cur, type, data, (col, color), edges)
345 345 seen = next
346 346
347 347 def grandparent(cl, lowestrev, roots, head):
348 348 """Return all ancestors of head in roots whose revision is
349 349 greater than or equal to lowestrev.
350 350 """
351 351 pending = set([head])
352 352 seen = set()
353 353 kept = set()
354 354 llowestrev = max(nullrev, lowestrev)
355 355 while pending:
356 356 r = pending.pop()
357 357 if r >= llowestrev and r not in seen:
358 358 if r in roots:
359 359 kept.add(r)
360 360 else:
361 361 pending.update([p for p in cl.parentrevs(r)])
362 362 seen.add(r)
363 363 return sorted(kept)
364 364
365 365 def asciiedges(type, char, lines, seen, rev, parents):
366 366 """adds edge info to changelog DAG walk suitable for ascii()"""
367 367 if rev not in seen:
368 368 seen.append(rev)
369 369 nodeidx = seen.index(rev)
370 370
371 371 knownparents = []
372 372 newparents = []
373 373 for parent in parents:
374 374 if parent in seen:
375 375 knownparents.append(parent)
376 376 else:
377 377 newparents.append(parent)
378 378
379 379 ncols = len(seen)
380 380 nextseen = seen[:]
381 381 nextseen[nodeidx:nodeidx + 1] = newparents
382 382 edges = [(nodeidx, nextseen.index(p)) for p in knownparents if p != nullrev]
383 383
384 384 while len(newparents) > 2:
385 385 # ascii() only knows how to add or remove a single column between two
386 386 # calls. Nodes with more than two parents break this constraint so we
387 387 # introduce intermediate expansion lines to grow the active node list
388 388 # slowly.
389 389 edges.append((nodeidx, nodeidx))
390 390 edges.append((nodeidx, nodeidx + 1))
391 391 nmorecols = 1
392 392 yield (type, char, lines, (nodeidx, edges, ncols, nmorecols))
393 393 char = '\\'
394 394 lines = []
395 395 nodeidx += 1
396 396 ncols += 1
397 397 edges = []
398 398 del newparents[0]
399 399
400 400 if len(newparents) > 0:
401 401 edges.append((nodeidx, nodeidx))
402 402 if len(newparents) > 1:
403 403 edges.append((nodeidx, nodeidx + 1))
404 404 nmorecols = len(nextseen) - ncols
405 405 seen[:] = nextseen
406 406 yield (type, char, lines, (nodeidx, edges, ncols, nmorecols))
407 407
408 408 def _fixlongrightedges(edges):
409 409 for (i, (start, end)) in enumerate(edges):
410 410 if end > start:
411 411 edges[i] = (start, end + 1)
412 412
413 413 def _getnodelineedgestail(
414 414 node_index, p_node_index, n_columns, n_columns_diff, p_diff, fix_tail):
415 415 if fix_tail and n_columns_diff == p_diff and n_columns_diff != 0:
416 416 # Still going in the same non-vertical direction.
417 417 if n_columns_diff == -1:
418 418 start = max(node_index + 1, p_node_index)
419 419 tail = ["|", " "] * (start - node_index - 1)
420 420 tail.extend(["/", " "] * (n_columns - start))
421 421 return tail
422 422 else:
423 423 return ["\\", " "] * (n_columns - node_index - 1)
424 424 else:
425 425 return ["|", " "] * (n_columns - node_index - 1)
426 426
427 427 def _drawedges(edges, nodeline, interline):
428 428 for (start, end) in edges:
429 429 if start == end + 1:
430 430 interline[2 * end + 1] = "/"
431 431 elif start == end - 1:
432 432 interline[2 * start + 1] = "\\"
433 433 elif start == end:
434 434 interline[2 * start] = "|"
435 435 else:
436 436 if 2 * end >= len(nodeline):
437 437 continue
438 438 nodeline[2 * end] = "+"
439 439 if start > end:
440 440 (start, end) = (end, start)
441 441 for i in range(2 * start + 1, 2 * end):
442 442 if nodeline[i] != "+":
443 443 nodeline[i] = "-"
444 444
445 445 def _getpaddingline(ni, n_columns, edges):
446 446 line = []
447 447 line.extend(["|", " "] * ni)
448 448 if (ni, ni - 1) in edges or (ni, ni) in edges:
449 449 # (ni, ni - 1) (ni, ni)
450 450 # | | | | | | | |
451 451 # +---o | | o---+
452 452 # | | c | | c | |
453 453 # | |/ / | |/ /
454 454 # | | | | | |
455 455 c = "|"
456 456 else:
457 457 c = " "
458 458 line.extend([c, " "])
459 459 line.extend(["|", " "] * (n_columns - ni - 1))
460 460 return line
461 461
462 462 def asciistate():
463 463 """returns the initial value for the "state" argument to ascii()"""
464 464 return [0, 0]
465 465
466 466 def ascii(ui, state, type, char, text, coldata):
467 467 """prints an ASCII graph of the DAG
468 468
469 469 takes the following arguments (one call per node in the graph):
470 470
471 471 - ui to write to
472 472 - Somewhere to keep the needed state in (init to asciistate())
473 473 - Column of the current node in the set of ongoing edges.
474 474 - Type indicator of node data, usually 'C' for changesets.
475 475 - Payload: (char, lines):
476 476 - Character to use as node's symbol.
477 477 - List of lines to display as the node's text.
478 478 - Edges; a list of (col, next_col) indicating the edges between
479 479 the current node and its parents.
480 480 - Number of columns (ongoing edges) in the current revision.
481 481 - The difference between the number of columns (ongoing edges)
482 482 in the next revision and the number of columns (ongoing edges)
483 483 in the current revision. That is: -1 means one column removed;
484 484 0 means no columns added or removed; 1 means one column added.
485 485 """
486 486
487 487 idx, edges, ncols, coldiff = coldata
488 488 assert -2 < coldiff < 2
489 489 if coldiff == -1:
490 490 # Transform
491 491 #
492 492 # | | | | | |
493 493 # o | | into o---+
494 494 # |X / |/ /
495 495 # | | | |
496 496 _fixlongrightedges(edges)
497 497
498 498 # add_padding_line says whether to rewrite
499 499 #
500 500 # | | | | | | | |
501 501 # | o---+ into | o---+
502 502 # | / / | | | # <--- padding line
503 503 # o | | | / /
504 504 # o | |
505 505 add_padding_line = (len(text) > 2 and coldiff == -1 and
506 506 [x for (x, y) in edges if x + 1 < y])
507 507
508 508 # fix_nodeline_tail says whether to rewrite
509 509 #
510 510 # | | o | | | | o | |
511 511 # | | |/ / | | |/ /
512 512 # | o | | into | o / / # <--- fixed nodeline tail
513 513 # | |/ / | |/ /
514 514 # o | | o | |
515 515 fix_nodeline_tail = len(text) <= 2 and not add_padding_line
516 516
517 517 # nodeline is the line containing the node character (typically o)
518 518 nodeline = ["|", " "] * idx
519 519 nodeline.extend([char, " "])
520 520
521 521 nodeline.extend(
522 522 _getnodelineedgestail(idx, state[1], ncols, coldiff,
523 523 state[0], fix_nodeline_tail))
524 524
525 525 # shift_interline is the line containing the non-vertical
526 526 # edges between this entry and the next
527 527 shift_interline = ["|", " "] * idx
528 528 if coldiff == -1:
529 529 n_spaces = 1
530 530 edge_ch = "/"
531 531 elif coldiff == 0:
532 532 n_spaces = 2
533 533 edge_ch = "|"
534 534 else:
535 535 n_spaces = 3
536 536 edge_ch = "\\"
537 537 shift_interline.extend(n_spaces * [" "])
538 538 shift_interline.extend([edge_ch, " "] * (ncols - idx - 1))
539 539
540 540 # draw edges from the current node to its parents
541 541 _drawedges(edges, nodeline, shift_interline)
542 542
543 543 # lines is the list of all graph lines to print
544 544 lines = [nodeline]
545 545 if add_padding_line:
546 546 lines.append(_getpaddingline(idx, ncols, edges))
547 547 lines.append(shift_interline)
548 548
549 549 # make sure that there are as many graph lines as there are
550 550 # log strings
551 551 while len(text) < len(lines):
552 552 text.append("")
553 553 if len(lines) < len(text):
554 554 extra_interline = ["|", " "] * (ncols + coldiff)
555 555 while len(lines) < len(text):
556 556 lines.append(extra_interline)
557 557
558 558 # print lines
559 559 indentation_level = max(ncols, ncols + coldiff)
560 560 for (line, logstr) in zip(lines, text):
561 561 ln = "%-*s %s" % (2 * indentation_level, "".join(line), logstr)
562 562 ui.write(ln.rstrip() + '\n')
563 563
564 564 # ... and start over
565 565 state[0] = coldiff
566 566 state[1] = idx
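The pending-revision loop in groupbranchiter relies on Python's `heapq`, which is a min-heap: it pushes negated revision numbers so that popping always yields the highest pending revision first. A minimal standalone sketch of that trick (toy data, not Mercurial code):

```python
import heapq

# heapq implements a min-heap; pushing negated revisions makes
# heappop() return the highest pending revision first, which is
# how groupbranchiter walks from heads toward roots.
pendingheap = []
for rev in [3, 7, 0, 5]:
    heapq.heappush(pendingheap, -rev)

order = []
while pendingheap:
    order.append(-heapq.heappop(pendingheap))
# order is now [7, 5, 3, 0]
```

This avoids sorting the pending set on every iteration; each push and pop is O(log n).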
@@ -1,101 +1,101
1 1 This test file aims to test topological iteration and the various configurations it can take.
2 2
3 3 $ cat >> $HGRCPATH << EOF
4 4 > [ui]
5 5 > logtemplate={rev}\n
6 6 > EOF
7 7
8 8 In this simple example, all topological branches are displayed in turn until
9 9 we can finally display 0. This implies skipping from 8 to 3 and coming back
10 10 to 7 later.
11 11
12 12 $ hg init test01
13 13 $ cd test01
14 14 $ hg unbundle $TESTDIR/bundles/remote.hg
15 15 adding changesets
16 16 adding manifests
17 17 adding file changes
18 18 added 9 changesets with 7 changes to 4 files (+1 heads)
19 19 (run 'hg heads' to see heads, 'hg merge' to merge)
20 20
21 21 $ hg log -G
22 22 o 8
23 23 |
24 24 | o 7
25 25 | |
26 26 | o 6
27 27 | |
28 28 | o 5
29 29 | |
30 30 | o 4
31 31 | |
32 32 o | 3
33 33 | |
34 34 o | 2
35 35 | |
36 36 o | 1
37 37 |/
38 38 o 0
39 39
40 40
41 41 (display all nodes)
42 42
43 $ hg --config experimental.graph-topological=1 log -G
43 $ hg --config experimental.graph-group-branches=1 log -G
44 44 o 8
45 45 |
46 46 o 3
47 47 |
48 48 o 2
49 49 |
50 50 o 1
51 51 |
52 52 | o 7
53 53 | |
54 54 | o 6
55 55 | |
56 56 | o 5
57 57 | |
58 58 | o 4
59 59 |/
60 60 o 0
61 61
62 62
63 63 (revset skipping nodes)
64 64
65 $ hg --config experimental.graph-topological=1 log -G --rev 'not (2+6)'
65 $ hg --config experimental.graph-group-branches=1 log -G --rev 'not (2+6)'
66 66 o 8
67 67 |
68 68 o 3
69 69 |
70 70 o 1
71 71 |
72 72 | o 7
73 73 | |
74 74 | o 5
75 75 | |
76 76 | o 4
77 77 |/
78 78 o 0
79 79
80 80
81 81 (begin from the other branch)
82 82
83 $ hg --config experimental.graph-topological=1 --config experimental.graph-topological.firstbranch=5 log -G
83 $ hg --config experimental.graph-group-branches=1 --config experimental.graph-group-branches.firstbranch=5 log -G
84 84 o 7
85 85 |
86 86 o 6
87 87 |
88 88 o 5
89 89 |
90 90 o 4
91 91 |
92 92 | o 8
93 93 | |
94 94 | o 3
95 95 | |
96 96 | o 2
97 97 | |
98 98 | o 1
99 99 |/
100 100 o 0
101 101