@@ -8,25 +8,26 @@ Configurations
 --------
 
 ``all-timing``
-  When set, additional statistic will be reported for each benchmark: best,
+  When set, additional statistics will be reported for each benchmark: best,
   worst, median average. If not set only the best timing is reported
   (default: off).
 
 ``presleep``
-  number of second to wait before any group of run (default: 1)
+  number of second to wait before any group of runs (default: 1)
 
 ``run-limits``
-  Control the number of run each benchmark will perform. The option value
+  Control the number of runs each benchmark will perform. The option value
   should be a list of `<time>-<numberofrun>` pairs. After each run the
-  condition are considered in order with the following logic:
+  conditions are considered in order with the following logic:
 
       If benchmark has been running for <time> seconds, and we have performed
       <numberofrun> iterations, stop the benchmark,
 
   The default value is: `3.0-100, 10.0-3`
 
 ``stub``
-  When set, benchmark will only be run once, useful for testing
+  When set, benchmarks will only be run once, useful for testing
+  (default: off)
 '''
 
 # "historical portability" policy of perf.py:
@@ -1217,8 +1218,8 @@ def perfstartup(ui, repo, **opts):
 def perfparents(ui, repo, **opts):
     """benchmark the time necessary to fetch one changeset's parents.
 
-    The fetch is done using the `node identifier`, traversing all object layer
+    The fetch is done using the `node identifier`, traversing all object layers
     from the repository object. The first N revisions will be used for this
     benchmark. N is controlled by the ``perf.parentscount`` config option
     (default: 1000).
     """
@@ -48,25 +48,25 @@ perfstatus
   ------
 
   "all-timing"
-      When set, additional statistic will be reported for each benchmark: best,
+      When set, additional statistics will be reported for each benchmark: best,
       worst, median average. If not set only the best timing is reported
       (default: off).
 
   "presleep"
-      number of second to wait before any group of run (default: 1)
+      number of second to wait before any group of runs (default: 1)
 
   "run-limits"
-      Control the number of run each benchmark will perform. The option value
+      Control the number of runs each benchmark will perform. The option value
       should be a list of '<time>-<numberofrun>' pairs. After each run the
-      condition are considered in order with the following logic:
+      conditions are considered in order with the following logic:
 
           If benchmark has been running for <time> seconds, and we have performed
           <numberofrun> iterations, stop the benchmark,
 
       The default value is: '3.0-100, 10.0-3'
 
   "stub"
-      When set, benchmark will only be run once, useful for testing (default:
+      When set, benchmarks will only be run once, useful for testing (default:
       off)
 
   list of commands:
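
The help output above repeats the ``run-limits`` rule: after each run, the ``<time>-<numberofrun>`` pairs are checked in order and the benchmark stops at the first pair whose elapsed time and iteration count have both been reached. With the default ``3.0-100, 10.0-3``, a fast benchmark stops once 3 seconds have elapsed and 100 runs are done, while a slow one stops once 10 seconds have elapsed and 3 runs are done. A one-off override for a quicker local check might look like this (values are illustrative, not part of the patch):

  # loosen the limits and keep the extra statistics for a quick local run
  $ hg perfstatus --config perf.run-limits='1.0-10, 5.0-3' --config perf.all-timing=yes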