branching: merge default into stable for 6.8rc0
Raphaël Gomès
r52541:6454c117 (merge, 6.8rc0, stable)

The requested changes are too big and content was truncated.

NO CONTENT: new file 100644 (file too big, content truncated)
NO CONTENT: new file 100644 (file too big, content truncated)
NO CONTENT: new file 100644 (file too big, content truncated)
@@ -1,159 +1,159 b''
1 # All revsets ever used with revsetbenchmarks.py script
1 # All revsets ever used with revsetbenchmarks.py script
2 #
2 #
3 # The goal of this file is to gather all revsets ever used for benchmarking
3 # The goal of this file is to gather all revsets ever used for benchmarking
4 # revset's performance. It should be used to gather revsets that test a
4 # revset's performance. It should be used to gather revsets that test a
5 # specific use case or a specific implementation of revset predicates.
5 # specific use case or a specific implementation of revset predicates.
6 # If you are working on the smartset implementation itself, check
6 # If you are working on the smartset implementation itself, check
7 # 'base-revsets.txt'.
7 # 'base-revsets.txt'.
8 #
8 #
9 # Please update this file with any revsets you use for benchmarking a change so
9 # Please update this file with any revsets you use for benchmarking a change so
10 # that future contributors can easily find and retest it when doing further
10 # that future contributors can easily find and retest it when doing further
11 # modifications. Feel free to highlight interesting variants if needed.
11 # modifications. Feel free to highlight interesting variants if needed.
12
12
13
13
14 ## Revsets in this section were all extracted from the changelog when this file was
14 ## Revsets in this section were all extracted from the changelog when this file was
15 # created. Feel free to dig in and improve the documentation.
15 # created. Feel free to dig in and improve the documentation.
16
16
17 # Used in revision da05fe01170b
17 # Used in revision da05fe01170b
18 (20000::) - (20000)
18 (20000::) - (20000)
19 # Used in revision 95af98616aa7
19 # Used in revision 95af98616aa7
20 parents(20000)
20 parents(20000)
21 # Used in revision 186fd06283b4
21 # Used in revision 186fd06283b4
22 (_intlist('20000\x0020001')) and merge()
22 (_intlist('20000\x0020001')) and merge()
23 # Used in revision 911f5a6579d1
23 # Used in revision 911f5a6579d1
24 p1(20000)
24 p1(20000)
25 p2(10000)
25 p2(10000)
26 # Used in revision b6dc3b79bb25
26 # Used in revision b6dc3b79bb25
27 0::
27 0::
28 # Used in revision faf4f63533ff
28 # Used in revision faf4f63533ff
29 bookmark()
29 bookmark()
30 # Used in revision 22ba2c0825da
30 # Used in revision 22ba2c0825da
31 tip~25
31 tip~25
32 # Used in revision 0cf46b8298fe
32 # Used in revision 0cf46b8298fe
33 bisect(range)
33 bisect(range)
34 # Used in revision 5b65429721d5
34 # Used in revision 5b65429721d5
35 divergent()
35 divergent()
36 # Used in revision 6261b9c549a2
36 # Used in revision 6261b9c549a2
37 file(COPYING)
37 file(COPYING)
38 # Used in revision 44f471102f3a
38 # Used in revision 44f471102f3a
39 follow(COPYING)
39 follow(COPYING)
40 # Used in revision 8040a44aab1c
40 # Used in revision 8040a44aab1c
41 origin(tip)
41 origin(tip)
42 # Used in revision bbf4f3dfd700
42 # Used in revision bbf4f3dfd700
43 rev(25)
43 rev(25)
44 # Used in revision a428db9ab61d
44 # Used in revision a428db9ab61d
45 p1()
45 p1()
46 # Used in revision c1546d7400ef
46 # Used in revision c1546d7400ef
47 min(0::)
47 min(0::)
48 # Used in revision 546fa6576815
48 # Used in revision 546fa6576815
49 author(lmoscovicz) or author(olivia)
49 author(lmoscovicz) or author("pierre-yves")
50 author(olivia) or author(lmoscovicz)
50 author("pierre-yves") or author(lmoscovicz)
51 # Used in revision 9bfe68357c01
51 # Used in revision 9bfe68357c01
52 public() and id("d82e2223f132")
52 public() and id("d82e2223f132")
53 # Used in revision ba89f7b542c9
53 # Used in revision ba89f7b542c9
54 rev(25)
54 rev(25)
55 # Used in revision eb763217152a
55 # Used in revision eb763217152a
56 rev(210000)
56 rev(210000)
57 # Used in revision 69524a05a7fa
57 # Used in revision 69524a05a7fa
58 10:100
58 10:100
59 parents(10):parents(100)
59 parents(10):parents(100)
60 # Used in revision 6f1b8b3f12fd
60 # Used in revision 6f1b8b3f12fd
61 100~5
61 100~5
62 parents(100)~5
62 parents(100)~5
63 (100~5)~5
63 (100~5)~5
64 # Used in revision 7a42e5d4c418
64 # Used in revision 7a42e5d4c418
65 children(tip~100)
65 children(tip~100)
66 # Used in revision 7e8737e6ab08
66 # Used in revision 7e8737e6ab08
67 100^1
67 100^1
68 parents(100)^1
68 parents(100)^1
69 (100^1)^1
69 (100^1)^1
70 # Used in revision 30e0dcd7c5ff
70 # Used in revision 30e0dcd7c5ff
71 matching(100)
71 matching(100)
72 matching(parents(100))
72 matching(parents(100))
73 # Used in revision aafeaba22826
73 # Used in revision aafeaba22826
74 0|1|2|3|4|5|6|7|8|9
74 0|1|2|3|4|5|6|7|8|9
75 # Used in revision 33c7a94d4dd0
75 # Used in revision 33c7a94d4dd0
76 tip:0
76 tip:0
77 # Used in revision 7d369fae098e
77 # Used in revision 7d369fae098e
78 (0:100000)
78 (0:100000)
79 # Used in revision b333ca94403d
79 # Used in revision b333ca94403d
80 0 + 1 + 2 + ... + 200
80 0 + 1 + 2 + ... + 200
81 0 + 1 + 2 + ... + 1000
81 0 + 1 + 2 + ... + 1000
82 sort(0 + 1 + 2 + ... + 200)
82 sort(0 + 1 + 2 + ... + 200)
83 sort(0 + 1 + 2 + ... + 1000)
83 sort(0 + 1 + 2 + ... + 1000)
84 # Used in revision 7fbef7932af9
84 # Used in revision 7fbef7932af9
85 first(0 + 1 + 2 + ... + 1000)
85 first(0 + 1 + 2 + ... + 1000)
86 # Used in revision ceaf04bb14ff
86 # Used in revision ceaf04bb14ff
87 0:1000
87 0:1000
88 # Used in revision 262e6ad93885
88 # Used in revision 262e6ad93885
89 not public()
89 not public()
90 (tip~1000::) - public()
90 (tip~1000::) - public()
91 not public() and branch("default")
91 not public() and branch("default")
92 # Used in revision 15412bba5a68
92 # Used in revision 15412bba5a68
93 0::tip
93 0::tip
94
94
95 ## All the revsets in this section were taken from the former central file
95 ## All the revsets in this section were taken from the former central file
96 # for revset benchmarking; they are undocumented for this reason.
96 # for revset benchmarking; they are undocumented for this reason.
97 all()
97 all()
98 draft()
98 draft()
99 ::tip
99 ::tip
100 draft() and ::tip
100 draft() and ::tip
101 ::tip and draft()
101 ::tip and draft()
102 author(lmoscovicz)
102 author(lmoscovicz)
103 author(olivia)
103 author("pierre-yves")
104 ::p1(p1(tip))::
104 ::p1(p1(tip))::
105 public()
105 public()
106 :10000 and public()
106 :10000 and public()
107 :10000 and draft()
107 :10000 and draft()
108 (not public() - obsolete())
108 (not public() - obsolete())
109
109
110 # The one below is used by rebase
110 # The one below is used by rebase
111 (children(ancestor(tip~5, tip)) and ::(tip~5))::
111 (children(ancestor(tip~5, tip)) and ::(tip~5))::
112
112
113 # those two `roots(...)` inputs are close to what phase movement uses.
113 # those two `roots(...)` inputs are close to what phase movement uses.
114 roots((tip~100::) - (tip~100::tip))
114 roots((tip~100::) - (tip~100::tip))
115 roots((0::) - (0::tip))
115 roots((0::) - (0::tip))
116
116
117 # more roots testing
117 # more roots testing
118 roots(tip~100:)
118 roots(tip~100:)
119 roots(:42)
119 roots(:42)
120 roots(not public())
120 roots(not public())
121 roots((0:tip)::)
121 roots((0:tip)::)
122 roots(0::tip)
122 roots(0::tip)
123 42:68 and roots(42:tip)
123 42:68 and roots(42:tip)
124 # Used in revision f140d6207cca
124 # Used in revision f140d6207cca
125 roots(0:tip)
125 roots(0:tip)
126 # test disjoint set with multiple roots
126 # test disjoint set with multiple roots
127 roots((:42) + (tip~42:))
127 roots((:42) + (tip~42:))
128
128
129 # Testing the behavior of "head()" in various situations
129 # Testing the behavior of "head()" in various situations
130 head()
130 head()
131 head() - public()
131 head() - public()
132 draft() and head()
132 draft() and head()
133 head() and author("olivia")
133 head() and author("pierre-yves")
134
134
135 # testing the mutable phases set
135 # testing the mutable phases set
136 draft()
136 draft()
137 secret()
137 secret()
138
138
139 # test finding common ancestors
139 # test finding common ancestors
140 heads(commonancestors(last(head(), 2)))
140 heads(commonancestors(last(head(), 2)))
141 heads(commonancestors(head()))
141 heads(commonancestors(head()))
142
142
143 # more heads testing
143 # more heads testing
144 heads(all())
144 heads(all())
145 heads(-10000:-1)
145 heads(-10000:-1)
146 (-5000:-1000) and heads(-10000:-1)
146 (-5000:-1000) and heads(-10000:-1)
147 heads(matching(tip, "author"))
147 heads(matching(tip, "author"))
148 heads(matching(tip, "author")) and -10000:-1
148 heads(matching(tip, "author")) and -10000:-1
149 (-10000:-1) and heads(matching(tip, "author"))
149 (-10000:-1) and heads(matching(tip, "author"))
150 # more roots testing
150 # more roots testing
151 roots(all())
151 roots(all())
152 roots(-10000:-1)
152 roots(-10000:-1)
153 (-5000:-1000) and roots(-10000:-1)
153 (-5000:-1000) and roots(-10000:-1)
154 roots(matching(tip, "author"))
154 roots(matching(tip, "author"))
155 roots(matching(tip, "author")) and -10000:-1
155 roots(matching(tip, "author")) and -10000:-1
156 (-10000:-1) and roots(matching(tip, "author"))
156 (-10000:-1) and roots(matching(tip, "author"))
157 only(max(head()))
157 only(max(head()))
158 only(max(head()), min(head()))
158 only(max(head()), min(head()))
159 only(max(head()), limit(head(), 1, 1))
159 only(max(head()), limit(head(), 1, 1))
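Each line above is one revset benchmark input for the revsetbenchmarks.py script. As a rough illustration of what timing one of these entries involves, the sketch below measures a single revset through the Mercurial Python API; the repository path and the chosen revset are assumptions for the example, not part of the diff above:

    import time
    from mercurial import hg, ui as uimod

    repo = hg.repository(uimod.ui.load(), b'.')  # repository in the current directory
    start = time.perf_counter()
    revs = repo.revs(b'(20000::) - (20000)')     # any entry from the file above
    n = len(revs)                                # force full evaluation of the lazy smartset
    print('%d revisions in %.6f s' % (n, time.perf_counter() - start))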
@@ -1,52 +1,52 b''
1 # Base Revsets to be used with revsetbenchmarks.py script
1 # Base Revsets to be used with revsetbenchmarks.py script
2 #
2 #
3 # The goal of this file is to gather a limited amount of revsets that allow a
3 # The goal of this file is to gather a limited amount of revsets that allow a
4 # good coverage of the internal revsets mechanisms. Revsets included should not
4 # good coverage of the internal revsets mechanisms. Revsets included should not
5 # be selected for their individual implementation, but for what they reveal of
5 # be selected for their individual implementation, but for what they reveal of
6 # the internal implementation of smartsets classes (and their interactions).
6 # the internal implementation of smartsets classes (and their interactions).
7 #
7 #
8 # Use and update this file when you change internal implementation of these
8 # Use and update this file when you change internal implementation of these
9 # smartsets classes. Please include a comment explaining what each of your
9 # smartsets classes. Please include a comment explaining what each of your
10 # addition is testing. Also check whether your changes to the smartset class make
10 # addition is testing. Also check whether your changes to the smartset class make
11 # some of the tests inadequate and replace them with a new one testing the same
11 # some of the tests inadequate and replace them with a new one testing the same
12 # behavior.
12 # behavior.
13 #
13 #
14 # If you want to benchmark revset predicates themselves, check 'all-revsets.txt'.
14 # If you want to benchmark revset predicates themselves, check 'all-revsets.txt'.
15 #
15 #
16 # The current content of this file likely does not reach this goal entirely;
16 # The current content of this file likely does not reach this goal entirely;
17 # feel free to audit its content and comment on each revset to
17 # feel free to audit its content and comment on each revset to
18 # highlight what internal mechanisms it tests.
18 # highlight what internal mechanisms it tests.
19
19
20 all()
20 all()
21 draft()
21 draft()
22 ::tip
22 ::tip
23 draft() and ::tip
23 draft() and ::tip
24 ::tip and draft()
24 ::tip and draft()
25 0::tip
25 0::tip
26 roots(0::tip)
26 roots(0::tip)
27 author(lmoscovicz)
27 author(lmoscovicz)
28 author(olivia)
28 author("pierre-yves")
29 author(lmoscovicz) or author(olivia)
29 author(lmoscovicz) or author("pierre-yves")
30 author(olivia) or author(lmoscovicz)
30 author("pierre-yves") or author(lmoscovicz)
31 tip:0
31 tip:0
32 0::
32 0::
33 # those two `roots(...)` inputs are close to what phase movement uses.
33 # those two `roots(...)` inputs are close to what phase movement uses.
34 roots((tip~100::) - (tip~100::tip))
34 roots((tip~100::) - (tip~100::tip))
35 roots((0::) - (0::tip))
35 roots((0::) - (0::tip))
36 42:68 and roots(42:tip)
36 42:68 and roots(42:tip)
37 ::p1(p1(tip))::
37 ::p1(p1(tip))::
38 public()
38 public()
39 :10000 and public()
39 :10000 and public()
40 draft()
40 draft()
41 :10000 and draft()
41 :10000 and draft()
42 roots((0:tip)::)
42 roots((0:tip)::)
43 (not public() - obsolete())
43 (not public() - obsolete())
44 (_intlist('20000\x0020001')) and merge()
44 (_intlist('20000\x0020001')) and merge()
45 parents(20000)
45 parents(20000)
46 (20000::) - (20000)
46 (20000::) - (20000)
47 # The one below is used by rebase
47 # The one below is used by rebase
48 (children(ancestor(tip~5, tip)) and ::(tip~5))::
48 (children(ancestor(tip~5, tip)) and ::(tip~5))::
49 heads(commonancestors(last(head(), 2)))
49 heads(commonancestors(last(head(), 2)))
50 heads(-10000:-1)
50 heads(-10000:-1)
51 roots(-10000:-1)
51 roots(-10000:-1)
52 only(max(head()), min(head()))
52 only(max(head()), min(head()))
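Note that some pairs above list both operand orders of the same expression (`draft() and ::tip` versus `::tip and draft()`): both select the same revisions, but they compose the underlying smartset classes differently, which is exactly what this file is meant to exercise. A minimal sketch checking that equivalence via the Python API (the repository path is an assumption):

    from mercurial import hg, ui as uimod

    repo = hg.repository(uimod.ui.load(), b'.')
    a = repo.revs(b'draft() and ::tip')
    b = repo.revs(b'::tip and draft()')
    assert set(a) == set(b)  # same members, different internal evaluation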
@@ -1,4651 +1,4725 b''
1 # perf.py - performance test routines
1 # perf.py - performance test routines
2 '''helper extension to measure performance
2 '''helper extension to measure performance
3
3
4 Configurations
4 Configurations
5 ==============
5 ==============
6
6
7 ``perf``
7 ``perf``
8 --------
8 --------
9
9
10 ``all-timing``
10 ``all-timing``
11 When set, additional statistics will be reported for each benchmark: best,
11 When set, additional statistics will be reported for each benchmark: best,
12 worst, median, average. If not set, only the best timing is reported
12 worst, median, average. If not set, only the best timing is reported
13 (default: off).
13 (default: off).
14
14
15 ``presleep``
15 ``presleep``
16 number of seconds to wait before any group of runs (default: 1)
16 number of seconds to wait before any group of runs (default: 1)
17
17
18 ``pre-run``
18 ``pre-run``
19 number of runs to perform before starting measurement.
19 number of runs to perform before starting measurement.
20
20
21 ``profile-benchmark``
21 ``profile-benchmark``
22 Enable profiling for the benchmarked section.
22 Enable profiling for the benchmarked section.
23 (The first iteration is benchmarked)
23 (by default, the first iteration is benchmarked)
24
25 ``profiled-runs``
26 list of iterations to profile (starting from 0)
24
27
25 ``run-limits``
28 ``run-limits``
26 Control the number of runs each benchmark will perform. The option value
29 Control the number of runs each benchmark will perform. The option value
27 should be a list of `<time>-<numberofrun>` pairs. After each run the
30 should be a list of `<time>-<numberofrun>` pairs. After each run the
28 conditions are considered in order with the following logic:
31 conditions are considered in order with the following logic:
29
32
30 If the benchmark has been running for <time> seconds and we have performed
33 If the benchmark has been running for <time> seconds and we have performed
31 <numberofrun> iterations, stop the benchmark.
34 <numberofrun> iterations, stop the benchmark.
32
35
33 The default value is: `3.0-100, 10.0-3`
36 The default value is: `3.0-100, 10.0-3`
34
37
35 ``stub``
38 ``stub``
36 When set, benchmarks will only be run once, useful for testing
39 When set, benchmarks will only be run once, useful for testing
37 (default: off)
40 (default: off)
38 '''
41 '''
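# For example, a minimal hgrc snippet exercising the options documented in the
# docstring above (values are illustrative, not the defaults):
#
#   [perf]
#   all-timing = yes
#   presleep = 0
#   pre-run = 2
#   run-limits = 5.0-50, 15.0-5
#   profile-benchmark = yes
#   profiled-runs = 0, 3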
39
42
40 # "historical portability" policy of perf.py:
43 # "historical portability" policy of perf.py:
41 #
44 #
42 # We have to do:
45 # We have to do:
43 # - make perf.py "loadable" with as wide a range of Mercurial versions as possible
46 # - make perf.py "loadable" with as wide a range of Mercurial versions as possible
44 # This doesn't mean that perf commands work correctly with that Mercurial.
47 # This doesn't mean that perf commands work correctly with that Mercurial.
45 # BTW, perf.py itself has been available since 1.1 (or eb240755386d).
48 # BTW, perf.py itself has been available since 1.1 (or eb240755386d).
46 # - make historical perf commands work correctly with as wide a range of
49 # - make historical perf commands work correctly with as wide a range of
47 # Mercurial versions as possible
50 # Mercurial versions as possible
48 #
51 #
49 # We have to do, if possible with reasonable cost:
52 # We have to do, if possible with reasonable cost:
50 # - make recent perf command for historical feature work correctly
53 # - make recent perf command for historical feature work correctly
51 # with early Mercurial
54 # with early Mercurial
52 #
55 #
53 # We don't have to do:
56 # We don't have to do:
54 # - make perf command for recent feature work correctly with early
57 # - make perf command for recent feature work correctly with early
55 # Mercurial
58 # Mercurial
56
59
57 import contextlib
60 import contextlib
58 import functools
61 import functools
59 import gc
62 import gc
60 import os
63 import os
61 import random
64 import random
62 import shutil
65 import shutil
63 import struct
66 import struct
64 import sys
67 import sys
65 import tempfile
68 import tempfile
66 import threading
69 import threading
67 import time
70 import time
68
71
69 import mercurial.revlog
72 import mercurial.revlog
70 from mercurial import (
73 from mercurial import (
71 changegroup,
74 changegroup,
72 cmdutil,
75 cmdutil,
73 commands,
76 commands,
74 copies,
77 copies,
75 error,
78 error,
76 extensions,
79 extensions,
77 hg,
80 hg,
78 mdiff,
81 mdiff,
79 merge,
82 merge,
80 util,
83 util,
81 )
84 )
82
85
83 # for "historical portability":
86 # for "historical portability":
84 # try to import modules separately (in dict order), and ignore
87 # try to import modules separately (in dict order), and ignore
85 # failure, because these aren't available with early Mercurial
88 # failure, because these aren't available with early Mercurial
86 try:
89 try:
87 from mercurial import branchmap # since 2.5 (or bcee63733aad)
90 from mercurial import branchmap # since 2.5 (or bcee63733aad)
88 except ImportError:
91 except ImportError:
89 pass
92 pass
90 try:
93 try:
91 from mercurial import obsolete # since 2.3 (or ad0d6c2b3279)
94 from mercurial import obsolete # since 2.3 (or ad0d6c2b3279)
92 except ImportError:
95 except ImportError:
93 pass
96 pass
94 try:
97 try:
95 from mercurial import registrar # since 3.7 (or 37d50250b696)
98 from mercurial import registrar # since 3.7 (or 37d50250b696)
96
99
97 dir(registrar) # forcibly load it
100 dir(registrar) # forcibly load it
98 except ImportError:
101 except ImportError:
99 registrar = None
102 registrar = None
100 try:
103 try:
101 from mercurial import repoview # since 2.5 (or 3a6ddacb7198)
104 from mercurial import repoview # since 2.5 (or 3a6ddacb7198)
102 except ImportError:
105 except ImportError:
103 pass
106 pass
104 try:
107 try:
105 from mercurial.utils import repoviewutil # since 5.0
108 from mercurial.utils import repoviewutil # since 5.0
106 except ImportError:
109 except ImportError:
107 repoviewutil = None
110 repoviewutil = None
108 try:
111 try:
109 from mercurial import scmutil # since 1.9 (or 8b252e826c68)
112 from mercurial import scmutil # since 1.9 (or 8b252e826c68)
110 except ImportError:
113 except ImportError:
111 pass
114 pass
112 try:
115 try:
113 from mercurial import setdiscovery # since 1.9 (or cb98fed52495)
116 from mercurial import setdiscovery # since 1.9 (or cb98fed52495)
114 except ImportError:
117 except ImportError:
115 pass
118 pass
116
119
117 try:
120 try:
118 from mercurial import profiling
121 from mercurial import profiling
119 except ImportError:
122 except ImportError:
120 profiling = None
123 profiling = None
121
124
122 try:
125 try:
123 from mercurial.revlogutils import constants as revlog_constants
126 from mercurial.revlogutils import constants as revlog_constants
124
127
125 perf_rl_kind = (revlog_constants.KIND_OTHER, b'created-by-perf')
128 perf_rl_kind = (revlog_constants.KIND_OTHER, b'created-by-perf')
126
129
127 def revlog(opener, *args, **kwargs):
130 def revlog(opener, *args, **kwargs):
128 return mercurial.revlog.revlog(opener, perf_rl_kind, *args, **kwargs)
131 return mercurial.revlog.revlog(opener, perf_rl_kind, *args, **kwargs)
129
132
130
133
131 except (ImportError, AttributeError):
134 except (ImportError, AttributeError):
132 perf_rl_kind = None
135 perf_rl_kind = None
133
136
134 def revlog(opener, *args, **kwargs):
137 def revlog(opener, *args, **kwargs):
135 return mercurial.revlog.revlog(opener, *args, **kwargs)
138 return mercurial.revlog.revlog(opener, *args, **kwargs)
136
139
137
140
138 def identity(a):
141 def identity(a):
139 return a
142 return a
140
143
141
144
142 try:
145 try:
143 from mercurial import pycompat
146 from mercurial import pycompat
144
147
145 getargspec = pycompat.getargspec # added to module after 4.5
148 getargspec = pycompat.getargspec # added to module after 4.5
146 _byteskwargs = pycompat.byteskwargs # since 4.1 (or fbc3f73dc802)
149 _byteskwargs = pycompat.byteskwargs # since 4.1 (or fbc3f73dc802)
147 _sysstr = pycompat.sysstr # since 4.0 (or 2219f4f82ede)
150 _sysstr = pycompat.sysstr # since 4.0 (or 2219f4f82ede)
148 _bytestr = pycompat.bytestr # since 4.2 (or b70407bd84d5)
151 _bytestr = pycompat.bytestr # since 4.2 (or b70407bd84d5)
149 _xrange = pycompat.xrange # since 4.8 (or 7eba8f83129b)
152 _xrange = pycompat.xrange # since 4.8 (or 7eba8f83129b)
150 fsencode = pycompat.fsencode # since 3.9 (or f4a5e0e86a7e)
153 fsencode = pycompat.fsencode # since 3.9 (or f4a5e0e86a7e)
151 if pycompat.ispy3:
154 if pycompat.ispy3:
152 _maxint = sys.maxsize # per py3 docs for replacing maxint
155 _maxint = sys.maxsize # per py3 docs for replacing maxint
153 else:
156 else:
154 _maxint = sys.maxint
157 _maxint = sys.maxint
155 except (NameError, ImportError, AttributeError):
158 except (NameError, ImportError, AttributeError):
156 import inspect
159 import inspect
157
160
158 getargspec = inspect.getargspec
161 getargspec = inspect.getargspec
159 _byteskwargs = identity
162 _byteskwargs = identity
160 _bytestr = str
163 _bytestr = str
161 fsencode = identity # no py3 support
164 fsencode = identity # no py3 support
162 _maxint = sys.maxint # no py3 support
165 _maxint = sys.maxint # no py3 support
163 _sysstr = lambda x: x # no py3 support
166 _sysstr = lambda x: x # no py3 support
164 _xrange = xrange
167 _xrange = xrange
165
168
166 try:
169 try:
167 # 4.7+
170 # 4.7+
168 queue = pycompat.queue.Queue
171 queue = pycompat.queue.Queue
169 except (NameError, AttributeError, ImportError):
172 except (NameError, AttributeError, ImportError):
170 # <4.7.
173 # <4.7.
171 try:
174 try:
172 queue = pycompat.queue
175 queue = pycompat.queue
173 except (NameError, AttributeError, ImportError):
176 except (NameError, AttributeError, ImportError):
174 import Queue as queue
177 import Queue as queue
175
178
176 try:
179 try:
177 from mercurial import logcmdutil
180 from mercurial import logcmdutil
178
181
179 makelogtemplater = logcmdutil.maketemplater
182 makelogtemplater = logcmdutil.maketemplater
180 except (AttributeError, ImportError):
183 except (AttributeError, ImportError):
181 try:
184 try:
182 makelogtemplater = cmdutil.makelogtemplater
185 makelogtemplater = cmdutil.makelogtemplater
183 except (AttributeError, ImportError):
186 except (AttributeError, ImportError):
184 makelogtemplater = None
187 makelogtemplater = None
185
188
186 # for "historical portability":
189 # for "historical portability":
187 # define util.safehasattr forcibly, because util.safehasattr has been
190 # define util.safehasattr forcibly, because util.safehasattr has been
188 # available since 1.9.3 (or 94b200a11cf7)
191 # available since 1.9.3 (or 94b200a11cf7)
189 _undefined = object()
192 _undefined = object()
190
193
191
194
192 def safehasattr(thing, attr):
195 def safehasattr(thing, attr):
193 return getattr(thing, _sysstr(attr), _undefined) is not _undefined
196 return getattr(thing, _sysstr(attr), _undefined) is not _undefined
194
197
195
198
196 setattr(util, 'safehasattr', safehasattr)
199 setattr(util, 'safehasattr', safehasattr)
197
200
198 # for "historical portability":
201 # for "historical portability":
199 # define util.timer forcibly, because util.timer has been available
202 # define util.timer forcibly, because util.timer has been available
200 # since ae5d60bb70c9
203 # since ae5d60bb70c9
201 if safehasattr(time, 'perf_counter'):
204 if safehasattr(time, 'perf_counter'):
202 util.timer = time.perf_counter
205 util.timer = time.perf_counter
203 elif os.name == b'nt':
206 elif os.name == b'nt':
204 util.timer = time.clock
207 util.timer = time.clock
205 else:
208 else:
206 util.timer = time.time
209 util.timer = time.time
207
210
208 # for "historical portability":
211 # for "historical portability":
209 # use locally defined empty option list, if formatteropts isn't
212 # use locally defined empty option list, if formatteropts isn't
210 # available, because commands.formatteropts has been available since
213 # available, because commands.formatteropts has been available since
211 # 3.2 (or 7a7eed5176a4), even though formatting itself has been
214 # 3.2 (or 7a7eed5176a4), even though formatting itself has been
212 # available since 2.2 (or ae5f92e154d3)
215 # available since 2.2 (or ae5f92e154d3)
213 formatteropts = getattr(
216 formatteropts = getattr(
214 cmdutil, "formatteropts", getattr(commands, "formatteropts", [])
217 cmdutil, "formatteropts", getattr(commands, "formatteropts", [])
215 )
218 )
216
219
217 # for "historical portability":
220 # for "historical portability":
218 # use locally defined option list, if debugrevlogopts isn't available,
221 # use locally defined option list, if debugrevlogopts isn't available,
219 # because commands.debugrevlogopts has been available since 3.7 (or
222 # because commands.debugrevlogopts has been available since 3.7 (or
220 # 5606f7d0d063), even though cmdutil.openrevlog() has been available
223 # 5606f7d0d063), even though cmdutil.openrevlog() has been available
221 # since 1.9 (or a79fea6b3e77).
224 # since 1.9 (or a79fea6b3e77).
222 revlogopts = getattr(
225 revlogopts = getattr(
223 cmdutil,
226 cmdutil,
224 "debugrevlogopts",
227 "debugrevlogopts",
225 getattr(
228 getattr(
226 commands,
229 commands,
227 "debugrevlogopts",
230 "debugrevlogopts",
228 [
231 [
229 (b'c', b'changelog', False, b'open changelog'),
232 (b'c', b'changelog', False, b'open changelog'),
230 (b'm', b'manifest', False, b'open manifest'),
233 (b'm', b'manifest', False, b'open manifest'),
231 (b'', b'dir', False, b'open directory manifest'),
234 (b'', b'dir', False, b'open directory manifest'),
232 ],
235 ],
233 ),
236 ),
234 )
237 )
235
238
236 cmdtable = {}
239 cmdtable = {}
237
240
238
241
239 # for "historical portability":
242 # for "historical portability":
240 # define parsealiases locally, because cmdutil.parsealiases has been
243 # define parsealiases locally, because cmdutil.parsealiases has been
241 # available since 1.5 (or 6252852b4332)
244 # available since 1.5 (or 6252852b4332)
242 def parsealiases(cmd):
245 def parsealiases(cmd):
243 return cmd.split(b"|")
246 return cmd.split(b"|")
244
247
245
248
246 if safehasattr(registrar, 'command'):
249 if safehasattr(registrar, 'command'):
247 command = registrar.command(cmdtable)
250 command = registrar.command(cmdtable)
248 elif safehasattr(cmdutil, 'command'):
251 elif safehasattr(cmdutil, 'command'):
249 command = cmdutil.command(cmdtable)
252 command = cmdutil.command(cmdtable)
250 if 'norepo' not in getargspec(command).args:
253 if 'norepo' not in getargspec(command).args:
251 # for "historical portability":
254 # for "historical portability":
252 # wrap original cmdutil.command, because "norepo" option has
255 # wrap original cmdutil.command, because "norepo" option has
253 # been available since 3.1 (or 75a96326cecb)
256 # been available since 3.1 (or 75a96326cecb)
254 _command = command
257 _command = command
255
258
256 def command(name, options=(), synopsis=None, norepo=False):
259 def command(name, options=(), synopsis=None, norepo=False):
257 if norepo:
260 if norepo:
258 commands.norepo += b' %s' % b' '.join(parsealiases(name))
261 commands.norepo += b' %s' % b' '.join(parsealiases(name))
259 return _command(name, list(options), synopsis)
262 return _command(name, list(options), synopsis)
260
263
261
264
262 else:
265 else:
263 # for "historical portability":
266 # for "historical portability":
264 # define "@command" annotation locally, because cmdutil.command
267 # define "@command" annotation locally, because cmdutil.command
265 # has been available since 1.9 (or 2daa5179e73f)
268 # has been available since 1.9 (or 2daa5179e73f)
266 def command(name, options=(), synopsis=None, norepo=False):
269 def command(name, options=(), synopsis=None, norepo=False):
267 def decorator(func):
270 def decorator(func):
268 if synopsis:
271 if synopsis:
269 cmdtable[name] = func, list(options), synopsis
272 cmdtable[name] = func, list(options), synopsis
270 else:
273 else:
271 cmdtable[name] = func, list(options)
274 cmdtable[name] = func, list(options)
272 if norepo:
275 if norepo:
273 commands.norepo += b' %s' % b' '.join(parsealiases(name))
276 commands.norepo += b' %s' % b' '.join(parsealiases(name))
274 return func
277 return func
275
278
276 return decorator
279 return decorator
277
280
278
281
279 try:
282 try:
280 import mercurial.registrar
283 import mercurial.registrar
281 import mercurial.configitems
284 import mercurial.configitems
282
285
283 configtable = {}
286 configtable = {}
284 configitem = mercurial.registrar.configitem(configtable)
287 configitem = mercurial.registrar.configitem(configtable)
285 configitem(
288 configitem(
286 b'perf',
289 b'perf',
287 b'presleep',
290 b'presleep',
288 default=mercurial.configitems.dynamicdefault,
291 default=mercurial.configitems.dynamicdefault,
289 experimental=True,
292 experimental=True,
290 )
293 )
291 configitem(
294 configitem(
292 b'perf',
295 b'perf',
293 b'stub',
296 b'stub',
294 default=mercurial.configitems.dynamicdefault,
297 default=mercurial.configitems.dynamicdefault,
295 experimental=True,
298 experimental=True,
296 )
299 )
297 configitem(
300 configitem(
298 b'perf',
301 b'perf',
299 b'parentscount',
302 b'parentscount',
300 default=mercurial.configitems.dynamicdefault,
303 default=mercurial.configitems.dynamicdefault,
301 experimental=True,
304 experimental=True,
302 )
305 )
303 configitem(
306 configitem(
304 b'perf',
307 b'perf',
305 b'all-timing',
308 b'all-timing',
306 default=mercurial.configitems.dynamicdefault,
309 default=mercurial.configitems.dynamicdefault,
307 experimental=True,
310 experimental=True,
308 )
311 )
309 configitem(
312 configitem(
310 b'perf',
313 b'perf',
311 b'pre-run',
314 b'pre-run',
312 default=mercurial.configitems.dynamicdefault,
315 default=mercurial.configitems.dynamicdefault,
313 )
316 )
314 configitem(
317 configitem(
315 b'perf',
318 b'perf',
316 b'profile-benchmark',
319 b'profile-benchmark',
317 default=mercurial.configitems.dynamicdefault,
320 default=mercurial.configitems.dynamicdefault,
318 )
321 )
319 configitem(
322 configitem(
320 b'perf',
323 b'perf',
324 b'profiled-runs',
325 default=mercurial.configitems.dynamicdefault,
326 )
327 configitem(
328 b'perf',
321 b'run-limits',
329 b'run-limits',
322 default=mercurial.configitems.dynamicdefault,
330 default=mercurial.configitems.dynamicdefault,
323 experimental=True,
331 experimental=True,
324 )
332 )
325 except (ImportError, AttributeError):
333 except (ImportError, AttributeError):
326 pass
334 pass
327 except TypeError:
335 except TypeError:
328 # compatibility fix for a11fd395e83f
336 # compatibility fix for a11fd395e83f
329 # hg version: 5.2
337 # hg version: 5.2
330 configitem(
338 configitem(
331 b'perf',
339 b'perf',
332 b'presleep',
340 b'presleep',
333 default=mercurial.configitems.dynamicdefault,
341 default=mercurial.configitems.dynamicdefault,
334 )
342 )
335 configitem(
343 configitem(
336 b'perf',
344 b'perf',
337 b'stub',
345 b'stub',
338 default=mercurial.configitems.dynamicdefault,
346 default=mercurial.configitems.dynamicdefault,
339 )
347 )
340 configitem(
348 configitem(
341 b'perf',
349 b'perf',
342 b'parentscount',
350 b'parentscount',
343 default=mercurial.configitems.dynamicdefault,
351 default=mercurial.configitems.dynamicdefault,
344 )
352 )
345 configitem(
353 configitem(
346 b'perf',
354 b'perf',
347 b'all-timing',
355 b'all-timing',
348 default=mercurial.configitems.dynamicdefault,
356 default=mercurial.configitems.dynamicdefault,
349 )
357 )
350 configitem(
358 configitem(
351 b'perf',
359 b'perf',
352 b'pre-run',
360 b'pre-run',
353 default=mercurial.configitems.dynamicdefault,
361 default=mercurial.configitems.dynamicdefault,
354 )
362 )
355 configitem(
363 configitem(
356 b'perf',
364 b'perf',
357 b'profile-benchmark',
365 b'profiled-runs',
358 default=mercurial.configitems.dynamicdefault,
366 default=mercurial.configitems.dynamicdefault,
359 )
367 )
360 configitem(
368 configitem(
361 b'perf',
369 b'perf',
362 b'run-limits',
370 b'run-limits',
363 default=mercurial.configitems.dynamicdefault,
371 default=mercurial.configitems.dynamicdefault,
364 )
372 )
365
373
366
374
367 def getlen(ui):
375 def getlen(ui):
368 if ui.configbool(b"perf", b"stub", False):
376 if ui.configbool(b"perf", b"stub", False):
369 return lambda x: 1
377 return lambda x: 1
370 return len
378 return len
371
379
372
380
373 class noop:
381 class noop:
374 """dummy context manager"""
382 """dummy context manager"""
375
383
376 def __enter__(self):
384 def __enter__(self):
377 pass
385 pass
378
386
379 def __exit__(self, *args):
387 def __exit__(self, *args):
380 pass
388 pass
381
389
382
390
383 NOOPCTX = noop()
391 NOOPCTX = noop()
384
392
385
393
386 def gettimer(ui, opts=None):
394 def gettimer(ui, opts=None):
387 """return a timer function and formatter: (timer, formatter)
395 """return a timer function and formatter: (timer, formatter)
388
396
389 This function exists to gather the creation of formatter in a single
397 This function exists to gather the creation of formatter in a single
390 place instead of duplicating it in all performance commands."""
398 place instead of duplicating it in all performance commands."""
391
399
392 # enforce an idle period before execution to counteract power management
400 # enforce an idle period before execution to counteract power management
393 # experimental config: perf.presleep
401 # experimental config: perf.presleep
394 time.sleep(getint(ui, b"perf", b"presleep", 1))
402 time.sleep(getint(ui, b"perf", b"presleep", 1))
395
403
396 if opts is None:
404 if opts is None:
397 opts = {}
405 opts = {}
398 # redirect all to stderr unless buffer api is in use
406 # redirect all to stderr unless buffer api is in use
399 if not ui._buffers:
407 if not ui._buffers:
400 ui = ui.copy()
408 ui = ui.copy()
401 uifout = safeattrsetter(ui, b'fout', ignoremissing=True)
409 uifout = safeattrsetter(ui, b'fout', ignoremissing=True)
402 if uifout:
410 if uifout:
403 # for "historical portability":
411 # for "historical portability":
404 # ui.fout/ferr have been available since 1.9 (or 4e1ccd4c2b6d)
412 # ui.fout/ferr have been available since 1.9 (or 4e1ccd4c2b6d)
405 uifout.set(ui.ferr)
413 uifout.set(ui.ferr)
406
414
407 # get a formatter
415 # get a formatter
408 uiformatter = getattr(ui, 'formatter', None)
416 uiformatter = getattr(ui, 'formatter', None)
409 if uiformatter:
417 if uiformatter:
410 fm = uiformatter(b'perf', opts)
418 fm = uiformatter(b'perf', opts)
411 else:
419 else:
412 # for "historical portability":
420 # for "historical portability":
413 # define formatter locally, because ui.formatter has been
421 # define formatter locally, because ui.formatter has been
414 # available since 2.2 (or ae5f92e154d3)
422 # available since 2.2 (or ae5f92e154d3)
415 from mercurial import node
423 from mercurial import node
416
424
417 class defaultformatter:
425 class defaultformatter:
418 """Minimized composition of baseformatter and plainformatter"""
426 """Minimized composition of baseformatter and plainformatter"""
419
427
420 def __init__(self, ui, topic, opts):
428 def __init__(self, ui, topic, opts):
421 self._ui = ui
429 self._ui = ui
422 if ui.debugflag:
430 if ui.debugflag:
423 self.hexfunc = node.hex
431 self.hexfunc = node.hex
424 else:
432 else:
425 self.hexfunc = node.short
433 self.hexfunc = node.short
426
434
427 def __nonzero__(self):
435 def __nonzero__(self):
428 return False
436 return False
429
437
430 __bool__ = __nonzero__
438 __bool__ = __nonzero__
431
439
432 def startitem(self):
440 def startitem(self):
433 pass
441 pass
434
442
435 def data(self, **data):
443 def data(self, **data):
436 pass
444 pass
437
445
438 def write(self, fields, deftext, *fielddata, **opts):
446 def write(self, fields, deftext, *fielddata, **opts):
439 self._ui.write(deftext % fielddata, **opts)
447 self._ui.write(deftext % fielddata, **opts)
440
448
441 def condwrite(self, cond, fields, deftext, *fielddata, **opts):
449 def condwrite(self, cond, fields, deftext, *fielddata, **opts):
442 if cond:
450 if cond:
443 self._ui.write(deftext % fielddata, **opts)
451 self._ui.write(deftext % fielddata, **opts)
444
452
445 def plain(self, text, **opts):
453 def plain(self, text, **opts):
446 self._ui.write(text, **opts)
454 self._ui.write(text, **opts)
447
455
448 def end(self):
456 def end(self):
449 pass
457 pass
450
458
451 fm = defaultformatter(ui, b'perf', opts)
459 fm = defaultformatter(ui, b'perf', opts)
452
460
453 # stub function, runs code only once instead of in a loop
461 # stub function, runs code only once instead of in a loop
454 # experimental config: perf.stub
462 # experimental config: perf.stub
455 if ui.configbool(b"perf", b"stub", False):
463 if ui.configbool(b"perf", b"stub", False):
456 return functools.partial(stub_timer, fm), fm
464 return functools.partial(stub_timer, fm), fm
457
465
458 # experimental config: perf.all-timing
466 # experimental config: perf.all-timing
459 displayall = ui.configbool(b"perf", b"all-timing", True)
467 displayall = ui.configbool(b"perf", b"all-timing", True)
460
468
461 # experimental config: perf.run-limits
469 # experimental config: perf.run-limits
462 limitspec = ui.configlist(b"perf", b"run-limits", [])
470 limitspec = ui.configlist(b"perf", b"run-limits", [])
463 limits = []
471 limits = []
464 for item in limitspec:
472 for item in limitspec:
465 parts = item.split(b'-', 1)
473 parts = item.split(b'-', 1)
466 if len(parts) < 2:
474 if len(parts) < 2:
467 ui.warn((b'malformatted run limit entry, missing "-": %s\n' % item))
475 ui.warn((b'malformatted run limit entry, missing "-": %s\n' % item))
468 continue
476 continue
469 try:
477 try:
470 time_limit = float(_sysstr(parts[0]))
478 time_limit = float(_sysstr(parts[0]))
471 except ValueError as e:
479 except ValueError as e:
472 ui.warn(
480 ui.warn(
473 (
481 (
474 b'malformatted run limit entry, %s: %s\n'
482 b'malformatted run limit entry, %s: %s\n'
475 % (_bytestr(e), item)
483 % (_bytestr(e), item)
476 )
484 )
477 )
485 )
478 continue
486 continue
479 try:
487 try:
480 run_limit = int(_sysstr(parts[1]))
488 run_limit = int(_sysstr(parts[1]))
481 except ValueError as e:
489 except ValueError as e:
482 ui.warn(
490 ui.warn(
483 (
491 (
484 b'malformatted run limit entry, %s: %s\n'
492 b'malformatted run limit entry, %s: %s\n'
485 % (_bytestr(e), item)
493 % (_bytestr(e), item)
486 )
494 )
487 )
495 )
488 continue
496 continue
489 limits.append((time_limit, run_limit))
497 limits.append((time_limit, run_limit))
490 if not limits:
498 if not limits:
491 limits = DEFAULTLIMITS
499 limits = DEFAULTLIMITS
492
500
493 profiler = None
501 profiler = None
502 profiled_runs = set()
494 if profiling is not None:
503 if profiling is not None:
495 if ui.configbool(b"perf", b"profile-benchmark", False):
504 if ui.configbool(b"perf", b"profile-benchmark", False):
496 profiler = profiling.profile(ui)
505 profiler = lambda: profiling.profile(ui)
506 for run in ui.configlist(b"perf", b"profiled-runs", [0]):
507 profiled_runs.add(int(run))
497
508
498 prerun = getint(ui, b"perf", b"pre-run", 0)
509 prerun = getint(ui, b"perf", b"pre-run", 0)
499 t = functools.partial(
510 t = functools.partial(
500 _timer,
511 _timer,
501 fm,
512 fm,
502 displayall=displayall,
513 displayall=displayall,
503 limits=limits,
514 limits=limits,
504 prerun=prerun,
515 prerun=prerun,
505 profiler=profiler,
516 profiler=profiler,
517 profiled_runs=profiled_runs,
506 )
518 )
507 return t, fm
519 return t, fm
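# Every perf command below consumes the (timer, formatter) pair returned here
# following the same basic pattern:
#
#   timer, fm = gettimer(ui, opts)
#   timer(lambda: thing_to_measure())   # thing_to_measure is a placeholder
#   fm.end()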
508
520
509
521
510 def stub_timer(fm, func, setup=None, title=None):
522 def stub_timer(fm, func, setup=None, title=None):
511 if setup is not None:
523 if setup is not None:
512 setup()
524 setup()
513 func()
525 func()
514
526
515
527
516 @contextlib.contextmanager
528 @contextlib.contextmanager
517 def timeone():
529 def timeone():
518 r = []
530 r = []
519 ostart = os.times()
531 ostart = os.times()
520 cstart = util.timer()
532 cstart = util.timer()
521 yield r
533 yield r
522 cstop = util.timer()
534 cstop = util.timer()
523 ostop = os.times()
535 ostop = os.times()
524 a, b = ostart, ostop
536 a, b = ostart, ostop
525 r.append((cstop - cstart, b[0] - a[0], b[1] - a[1]))
537 r.append((cstop - cstart, b[0] - a[0], b[1] - a[1]))
526
538
527
539
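# usage sketch for timeone(): the list bound by the context manager receives a
# single (wall, user, system) triple once the block exits, e.g.:
#
#   with timeone() as r:
#       func()
#   wall, user, system = r[0]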
528 # list of stop condition (elapsed time, minimal run count)
540 # list of stop condition (elapsed time, minimal run count)
529 DEFAULTLIMITS = (
541 DEFAULTLIMITS = (
530 (3.0, 100),
542 (3.0, 100),
531 (10.0, 3),
543 (10.0, 3),
532 )
544 )
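# e.g. with the defaults above, a benchmark keeps iterating until it has run
# for at least 3.0 seconds and 100 iterations, or 10.0 seconds and 3 iterations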
533
545
534
546
535 @contextlib.contextmanager
547 @contextlib.contextmanager
536 def noop_context():
548 def noop_context():
537 yield
549 yield
538
550
539
551
540 def _timer(
552 def _timer(
541 fm,
553 fm,
542 func,
554 func,
543 setup=None,
555 setup=None,
544 context=noop_context,
556 context=noop_context,
545 title=None,
557 title=None,
546 displayall=False,
558 displayall=False,
547 limits=DEFAULTLIMITS,
559 limits=DEFAULTLIMITS,
548 prerun=0,
560 prerun=0,
549 profiler=None,
561 profiler=None,
562 profiled_runs=(0,),
550 ):
563 ):
551 gc.collect()
564 gc.collect()
552 results = []
565 results = []
553 begin = util.timer()
554 count = 0
566 count = 0
555 if profiler is None:
567 if profiler is None:
556 profiler = NOOPCTX
568 profiler = lambda: NOOPCTX
557 for i in range(prerun):
569 for i in range(prerun):
558 if setup is not None:
570 if setup is not None:
559 setup()
571 setup()
560 with context():
572 with context():
561 func()
573 func()
574 begin = util.timer()
562 keepgoing = True
575 keepgoing = True
563 while keepgoing:
576 while keepgoing:
577 if count in profiled_runs:
578 prof = profiler()
579 else:
580 prof = NOOPCTX
564 if setup is not None:
581 if setup is not None:
565 setup()
582 setup()
566 with context():
583 with context():
567 with profiler:
584 gc.collect()
585 with prof:
568 with timeone() as item:
586 with timeone() as item:
569 r = func()
587 r = func()
570 profiler = NOOPCTX
571 count += 1
588 count += 1
572 results.append(item[0])
589 results.append(item[0])
573 cstop = util.timer()
590 cstop = util.timer()
574 # Look for a stop condition.
591 # Look for a stop condition.
575 elapsed = cstop - begin
592 elapsed = cstop - begin
576 for t, mincount in limits:
593 for t, mincount in limits:
577 if elapsed >= t and count >= mincount:
594 if elapsed >= t and count >= mincount:
578 keepgoing = False
595 keepgoing = False
579 break
596 break
580
597
581 formatone(fm, results, title=title, result=r, displayall=displayall)
598 formatone(fm, results, title=title, result=r, displayall=displayall)
582
599
583
600
584 def formatone(fm, timings, title=None, result=None, displayall=False):
601 def formatone(fm, timings, title=None, result=None, displayall=False):
585 count = len(timings)
602 count = len(timings)
586
603
587 fm.startitem()
604 fm.startitem()
588
605
589 if title:
606 if title:
590 fm.write(b'title', b'! %s\n', title)
607 fm.write(b'title', b'! %s\n', title)
591 if result:
608 if result:
592 fm.write(b'result', b'! result: %s\n', result)
609 fm.write(b'result', b'! result: %s\n', result)
593
610
594 def display(role, entry):
611 def display(role, entry):
595 prefix = b''
612 prefix = b''
596 if role != b'best':
613 if role != b'best':
597 prefix = b'%s.' % role
614 prefix = b'%s.' % role
598 fm.plain(b'!')
615 fm.plain(b'!')
599 fm.write(prefix + b'wall', b' wall %f', entry[0])
616 fm.write(prefix + b'wall', b' wall %f', entry[0])
600 fm.write(prefix + b'comb', b' comb %f', entry[1] + entry[2])
617 fm.write(prefix + b'comb', b' comb %f', entry[1] + entry[2])
601 fm.write(prefix + b'user', b' user %f', entry[1])
618 fm.write(prefix + b'user', b' user %f', entry[1])
602 fm.write(prefix + b'sys', b' sys %f', entry[2])
619 fm.write(prefix + b'sys', b' sys %f', entry[2])
603 fm.write(prefix + b'count', b' (%s of %%d)' % role, count)
620 fm.write(prefix + b'count', b' (%s of %%d)' % role, count)
604 fm.plain(b'\n')
621 fm.plain(b'\n')
605
622
606 timings.sort()
623 timings.sort()
607 min_val = timings[0]
624 min_val = timings[0]
608 display(b'best', min_val)
625 display(b'best', min_val)
609 if displayall:
626 if displayall:
610 max_val = timings[-1]
627 max_val = timings[-1]
611 display(b'max', max_val)
628 display(b'max', max_val)
612 avg = tuple([sum(x) / count for x in zip(*timings)])
629 avg = tuple([sum(x) / count for x in zip(*timings)])
613 display(b'avg', avg)
630 display(b'avg', avg)
614 median = timings[len(timings) // 2]
631 median = timings[len(timings) // 2]
615 display(b'median', median)
632 display(b'median', median)
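# sample output of the above (values illustrative; the max/avg/median rows
# appear only when all-timing is enabled):
#
#   ! wall 0.001000 comb 0.001000 user 0.001000 sys 0.000000 (best of 100)
#   ! wall 0.003000 comb 0.003000 user 0.002000 sys 0.001000 (max of 100)
#   ! wall 0.001500 comb 0.001500 user 0.001200 sys 0.000300 (avg of 100)
#   ! wall 0.001400 comb 0.001400 user 0.001100 sys 0.000300 (median of 100)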
616
633
617
634
618 # utilities for historical portability
635 # utilities for historical portability
619
636
620
637
621 def getint(ui, section, name, default):
638 def getint(ui, section, name, default):
622 # for "historical portability":
639 # for "historical portability":
623 # ui.configint has been available since 1.9 (or fa2b596db182)
640 # ui.configint has been available since 1.9 (or fa2b596db182)
624 v = ui.config(section, name, None)
641 v = ui.config(section, name, None)
625 if v is None:
642 if v is None:
626 return default
643 return default
627 try:
644 try:
628 return int(v)
645 return int(v)
629 except ValueError:
646 except ValueError:
630 raise error.ConfigError(
647 raise error.ConfigError(
631 b"%s.%s is not an integer ('%s')" % (section, name, v)
648 b"%s.%s is not an integer ('%s')" % (section, name, v)
632 )
649 )
633
650
634
651
635 def safeattrsetter(obj, name, ignoremissing=False):
652 def safeattrsetter(obj, name, ignoremissing=False):
636 """Ensure that 'obj' has 'name' attribute before subsequent setattr
653 """Ensure that 'obj' has 'name' attribute before subsequent setattr
637
654
638 This function aborts if 'obj' doesn't have a 'name' attribute
655 This function aborts if 'obj' doesn't have a 'name' attribute
639 at runtime. This avoids overlooking future removal of an attribute,
656 at runtime. This avoids overlooking future removal of an attribute,
640 which would break the assumptions of a performance measurement.
657 which would break the assumptions of a performance measurement.
641
658
642 This function returns the object to (1) assign a new value, and
659 This function returns the object to (1) assign a new value, and
643 (2) restore an original value to the attribute.
660 (2) restore an original value to the attribute.
644
661
645 If 'ignoremissing' is true, missing 'name' attribute doesn't cause
662 If 'ignoremissing' is true, missing 'name' attribute doesn't cause
646 abortion, and this function returns None. This is useful to
663 abortion, and this function returns None. This is useful to
647 examine an attribute, which isn't ensured in all Mercurial
664 examine an attribute, which isn't ensured in all Mercurial
648 versions.
665 versions.
649 """
666 """
650 if not util.safehasattr(obj, name):
667 if not util.safehasattr(obj, name):
651 if ignoremissing:
668 if ignoremissing:
652 return None
669 return None
653 raise error.Abort(
670 raise error.Abort(
654 (
671 (
655 b"missing attribute %s of %s might break assumption"
672 b"missing attribute %s of %s might break assumption"
656 b" of performance measurement"
673 b" of performance measurement"
657 )
674 )
658 % (name, obj)
675 % (name, obj)
659 )
676 )
660
677
661 origvalue = getattr(obj, _sysstr(name))
678 origvalue = getattr(obj, _sysstr(name))
662
679
663 class attrutil:
680 class attrutil:
664 def set(self, newvalue):
681 def set(self, newvalue):
665 setattr(obj, _sysstr(name), newvalue)
682 setattr(obj, _sysstr(name), newvalue)
666
683
667 def restore(self):
684 def restore(self):
668 setattr(obj, _sysstr(name), origvalue)
685 setattr(obj, _sysstr(name), origvalue)
669
686
670 return attrutil()
687 return attrutil()
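# usage sketch (this mirrors how gettimer() redirects ui output above):
#
#   uifout = safeattrsetter(ui, b'fout', ignoremissing=True)
#   if uifout:
#       uifout.set(ui.ferr)   # assign a replacement value
#       ...
#       uifout.restore()      # put the original value back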
671
688
672
689
673 # utilities to examine each internal API changes
690 # utilities to examine each internal API changes
674
691
675
692
676 def getbranchmapsubsettable():
693 def getbranchmapsubsettable():
677 # for "historical portability":
694 # for "historical portability":
678 # subsettable is defined in:
695 # subsettable is defined in:
679 # - branchmap since 2.9 (or 175c6fd8cacc)
696 # - branchmap since 2.9 (or 175c6fd8cacc)
680 # - repoview since 2.5 (or 59a9f18d4587)
697 # - repoview since 2.5 (or 59a9f18d4587)
681 # - repoviewutil since 5.0
698 # - repoviewutil since 5.0
682 for mod in (branchmap, repoview, repoviewutil):
699 for mod in (branchmap, repoview, repoviewutil):
683 subsettable = getattr(mod, 'subsettable', None)
700 subsettable = getattr(mod, 'subsettable', None)
684 if subsettable:
701 if subsettable:
685 return subsettable
702 return subsettable
686
703
687 # bisecting in bcee63733aad::59a9f18d4587 can reach here (both
704 # bisecting in bcee63733aad::59a9f18d4587 can reach here (both
688 # branchmap and repoview modules exist, but subsettable attribute
705 # branchmap and repoview modules exist, but subsettable attribute
689 # doesn't)
706 # doesn't)
690 raise error.Abort(
707 raise error.Abort(
691 b"perfbranchmap not available with this Mercurial",
708 b"perfbranchmap not available with this Mercurial",
692 hint=b"use 2.5 or later",
709 hint=b"use 2.5 or later",
693 )
710 )
694
711
695
712
696 def getsvfs(repo):
713 def getsvfs(repo):
697 """Return appropriate object to access files under .hg/store"""
714 """Return appropriate object to access files under .hg/store"""
698 # for "historical portability":
715 # for "historical portability":
699 # repo.svfs has been available since 2.3 (or 7034365089bf)
716 # repo.svfs has been available since 2.3 (or 7034365089bf)
700 svfs = getattr(repo, 'svfs', None)
717 svfs = getattr(repo, 'svfs', None)
701 if svfs:
718 if svfs:
702 return svfs
719 return svfs
703 else:
720 else:
704 return getattr(repo, 'sopener')
721 return getattr(repo, 'sopener')
705
722
706
723
707 def getvfs(repo):
724 def getvfs(repo):
708 """Return appropriate object to access files under .hg"""
725 """Return appropriate object to access files under .hg"""
709 # for "historical portability":
726 # for "historical portability":
710 # repo.vfs has been available since 2.3 (or 7034365089bf)
727 # repo.vfs has been available since 2.3 (or 7034365089bf)
711 vfs = getattr(repo, 'vfs', None)
728 vfs = getattr(repo, 'vfs', None)
712 if vfs:
729 if vfs:
713 return vfs
730 return vfs
714 else:
731 else:
715 return getattr(repo, 'opener')
732 return getattr(repo, 'opener')
716
733
717
734
718 def repocleartagscachefunc(repo):
735 def repocleartagscachefunc(repo):
719 """Return the function to clear tags cache according to repo internal API"""
736 """Return the function to clear tags cache according to repo internal API"""
720 if util.safehasattr(repo, b'_tagscache'): # since 2.0 (or 9dca7653b525)
737 if util.safehasattr(repo, b'_tagscache'): # since 2.0 (or 9dca7653b525)
721 # in this case, setattr(repo, '_tagscache', None) or so isn't
738 # in this case, setattr(repo, '_tagscache', None) or so isn't
722 # correct way to clear tags cache, because existing code paths
739 # correct way to clear tags cache, because existing code paths
723 # expect _tagscache to be a structured object.
740 # expect _tagscache to be a structured object.
724 def clearcache():
741 def clearcache():
725 # _tagscache has been filteredpropertycache since 2.5 (or
742 # _tagscache has been filteredpropertycache since 2.5 (or
726 # 98c867ac1330), and delattr() can't work in such case
743 # 98c867ac1330), and delattr() can't work in such case
727 if '_tagscache' in vars(repo):
744 if '_tagscache' in vars(repo):
728 del repo.__dict__['_tagscache']
745 del repo.__dict__['_tagscache']
729
746
730 return clearcache
747 return clearcache
731
748
732 repotags = safeattrsetter(repo, b'_tags', ignoremissing=True)
749 repotags = safeattrsetter(repo, b'_tags', ignoremissing=True)
733 if repotags: # since 1.4 (or 5614a628d173)
750 if repotags: # since 1.4 (or 5614a628d173)
734 return lambda: repotags.set(None)
751 return lambda: repotags.set(None)
735
752
736 repotagscache = safeattrsetter(repo, b'tagscache', ignoremissing=True)
753 repotagscache = safeattrsetter(repo, b'tagscache', ignoremissing=True)
737 if repotagscache: # since 0.6 (or d7df759d0e97)
754 if repotagscache: # since 0.6 (or d7df759d0e97)
738 return lambda: repotagscache.set(None)
755 return lambda: repotagscache.set(None)
739
756
740 # Mercurial earlier than 0.6 (or d7df759d0e97) logically reaches
757 # Mercurial earlier than 0.6 (or d7df759d0e97) logically reaches
741 # this point, but it isn't so problematic, because:
758 # this point, but it isn't so problematic, because:
742 # - repo.tags of such Mercurial isn't "callable", and repo.tags()
759 # - repo.tags of such Mercurial isn't "callable", and repo.tags()
743 # in perftags() causes failure soon
760 # in perftags() causes failure soon
744 # - perf.py itself has been available since 1.1 (or eb240755386d)
761 # - perf.py itself has been available since 1.1 (or eb240755386d)
745 raise error.Abort(b"tags API of this hg command is unknown")
762 raise error.Abort(b"tags API of this hg command is unknown")
746
763
747
764
748 # utilities to clear cache
765 # utilities to clear cache
749
766
750
767
751 def clearfilecache(obj, attrname):
768 def clearfilecache(obj, attrname):
752 unfiltered = getattr(obj, 'unfiltered', None)
769 unfiltered = getattr(obj, 'unfiltered', None)
753 if unfiltered is not None:
770 if unfiltered is not None:
754 obj = obj.unfiltered()
771 obj = obj.unfiltered()
755 if attrname in vars(obj):
772 if attrname in vars(obj):
756 delattr(obj, attrname)
773 delattr(obj, attrname)
757 obj._filecache.pop(attrname, None)
774 obj._filecache.pop(attrname, None)
758
775
759
776
760 def clearchangelog(repo):
777 def clearchangelog(repo):
761 if repo is not repo.unfiltered():
778 if repo is not repo.unfiltered():
762 object.__setattr__(repo, '_clcachekey', None)
779 object.__setattr__(repo, '_clcachekey', None)
763 object.__setattr__(repo, '_clcache', None)
780 object.__setattr__(repo, '_clcache', None)
764 clearfilecache(repo.unfiltered(), 'changelog')
781 clearfilecache(repo.unfiltered(), 'changelog')
765
782
766
783
767 # perf commands
784 # perf commands
768
785
769
786
770 @command(b'perf::walk|perfwalk', formatteropts)
787 @command(b'perf::walk|perfwalk', formatteropts)
771 def perfwalk(ui, repo, *pats, **opts):
788 def perfwalk(ui, repo, *pats, **opts):
772 opts = _byteskwargs(opts)
789 opts = _byteskwargs(opts)
773 timer, fm = gettimer(ui, opts)
790 timer, fm = gettimer(ui, opts)
774 m = scmutil.match(repo[None], pats, {})
791 m = scmutil.match(repo[None], pats, {})
775 timer(
792 timer(
776 lambda: len(
793 lambda: len(
777 list(
794 list(
778 repo.dirstate.walk(m, subrepos=[], unknown=True, ignored=False)
795 repo.dirstate.walk(m, subrepos=[], unknown=True, ignored=False)
779 )
796 )
780 )
797 )
781 )
798 )
782 fm.end()
799 fm.end()
783
800
784
801
785 @command(b'perf::annotate|perfannotate', formatteropts)
802 @command(b'perf::annotate|perfannotate', formatteropts)
786 def perfannotate(ui, repo, f, **opts):
803 def perfannotate(ui, repo, f, **opts):
787 opts = _byteskwargs(opts)
804 opts = _byteskwargs(opts)
788 timer, fm = gettimer(ui, opts)
805 timer, fm = gettimer(ui, opts)
789 fc = repo[b'.'][f]
806 fc = repo[b'.'][f]
790 timer(lambda: len(fc.annotate(True)))
807 timer(lambda: len(fc.annotate(True)))
791 fm.end()
808 fm.end()
792
809
793
810
@command(
    b'perf::status|perfstatus',
    [
        (b'u', b'unknown', False, b'ask status to look for unknown files'),
        (b'', b'dirstate', False, b'benchmark the internal dirstate call'),
    ]
    + formatteropts,
)
def perfstatus(ui, repo, **opts):
    """benchmark the performance of a single status call

    The repository data are preserved between each call.

    By default, only the status of tracked files is requested. If
    `--unknown` is passed, unknown files are also requested.
    """
    opts = _byteskwargs(opts)
    # m = match.always(repo.root, repo.getcwd())
    # timer(lambda: sum(map(len, repo.dirstate.status(m, [], False, False,
    #                                                False))))
    timer, fm = gettimer(ui, opts)
    if opts[b'dirstate']:
        dirstate = repo.dirstate
        m = scmutil.matchall(repo)
        unknown = opts[b'unknown']

        def status_dirstate():
            s = dirstate.status(
                m, subrepos=[], ignored=False, clean=False, unknown=unknown
            )
            sum(map(bool, s))

        if util.safehasattr(dirstate, 'running_status'):
            with dirstate.running_status(repo):
                timer(status_dirstate)
                dirstate.invalidate()
        else:
            timer(status_dirstate)
    else:
        timer(lambda: sum(map(len, repo.status(unknown=opts[b'unknown']))))
    fm.end()
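

# Illustrative invocations (a sketch; timings depend on the repository):
#
#   $ hg perf::status               # tracked files only
#   $ hg perf::status --unknown     # also look for unknown files
#   $ hg perf::status --dirstate    # time the low-level dirstate call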


@command(b'perf::addremove|perfaddremove', formatteropts)
def perfaddremove(ui, repo, **opts):
    opts = _byteskwargs(opts)
    timer, fm = gettimer(ui, opts)
    try:
        oldquiet = repo.ui.quiet
        repo.ui.quiet = True
        matcher = scmutil.match(repo[None])
        opts[b'dry_run'] = True
        if 'uipathfn' in getargspec(scmutil.addremove).args:
            uipathfn = scmutil.getuipathfn(repo)
            timer(lambda: scmutil.addremove(repo, matcher, b"", uipathfn, opts))
        else:
            timer(lambda: scmutil.addremove(repo, matcher, b"", opts))
    finally:
        repo.ui.quiet = oldquiet
    fm.end()


def clearcaches(cl):
    # behave somewhat consistently across internal API changes
    if util.safehasattr(cl, b'clearcaches'):
        cl.clearcaches()
    elif util.safehasattr(cl, b'_nodecache'):
        # <= hg-5.2
        from mercurial.node import nullid, nullrev

        cl._nodecache = {nullid: nullrev}
        cl._nodepos = None


@command(b'perf::heads|perfheads', formatteropts)
def perfheads(ui, repo, **opts):
    """benchmark the computation of the changelog heads"""
    opts = _byteskwargs(opts)
    timer, fm = gettimer(ui, opts)
    cl = repo.changelog

    def s():
        clearcaches(cl)

    def d():
        len(cl.headrevs())

    timer(d, setup=s)
    fm.end()


def _default_clear_on_disk_tags_cache(repo):
    from mercurial import tags

    repo.cachevfs.tryunlink(tags._filename(repo))


def _default_clear_on_disk_tags_fnodes_cache(repo):
    from mercurial import tags

    repo.cachevfs.tryunlink(tags._fnodescachefile)

def _default_forget_fnodes(repo, revs):
    """function used by the perf extension to prune some entries from the
    fnodes cache"""
    from mercurial import tags

    missing_1 = b'\xff' * 4
    missing_2 = b'\xff' * 20
    cache = tags.hgtagsfnodescache(repo.unfiltered())
    for r in revs:
        cache._writeentry(r * tags._fnodesrecsize, missing_1, missing_2)
    cache.write()
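

# Note (added context, hedged): each record in the fnodes cache is
# tags._fnodesrecsize bytes, a 4-byte changeset-hash fragment followed by the
# 20-byte `.hgtags` filenode. Overwriting both parts with 0xff, as above,
# marks the entry as unknown so the next tags computation must recompute it.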


@command(
    b'perf::tags|perftags',
    formatteropts
    + [
        (b'', b'clear-revlogs', False, b'refresh changelog and manifest'),
        (
            b'',
            b'clear-on-disk-cache',
            False,
            b'clear on disk tags cache (DESTRUCTIVE)',
        ),
        (
            b'',
            b'clear-fnode-cache-all',
            False,
            b'clear on disk file node cache (DESTRUCTIVE)',
        ),
        (
            b'',
            b'clear-fnode-cache-rev',
            [],
            b'clear on disk file node cache (DESTRUCTIVE)',
            b'REVS',
        ),
        (
            b'',
            b'update-last',
            b'',
            b'simulate an update over the last N revisions (DESTRUCTIVE)',
            b'N',
        ),
    ],
)
def perftags(ui, repo, **opts):
    """Benchmark tags retrieval in various situations

    Options marked as (DESTRUCTIVE) will alter the on-disk cache, possibly
    altering performance after the command was run. However, they do not
    destroy any stored data.
    """
    from mercurial import tags

    opts = _byteskwargs(opts)
    timer, fm = gettimer(ui, opts)
    repocleartagscache = repocleartagscachefunc(repo)
    clearrevlogs = opts[b'clear_revlogs']
    clear_disk = opts[b'clear_on_disk_cache']
    clear_fnode = opts[b'clear_fnode_cache_all']

    clear_fnode_revs = opts[b'clear_fnode_cache_rev']
    update_last_str = opts[b'update_last']
    update_last = None
    if update_last_str:
        try:
            update_last = int(update_last_str)
        except ValueError:
            msg = b'could not parse value for update-last: "%s"'
            msg %= update_last_str
            hint = b'value should be an integer'
            raise error.Abort(msg, hint=hint)

    clear_disk_fn = getattr(
        tags,
        "clear_cache_on_disk",
        _default_clear_on_disk_tags_cache,
    )
    if getattr(tags, 'clear_cache_fnodes_is_working', False):
        clear_fnodes_fn = tags.clear_cache_fnodes
    else:
        clear_fnodes_fn = _default_clear_on_disk_tags_fnodes_cache
    clear_fnodes_rev_fn = getattr(
        tags,
        "forget_fnodes",
        _default_forget_fnodes,
    )

    clear_revs = []
    if clear_fnode_revs:
        clear_revs.extend(scmutil.revrange(repo, clear_fnode_revs))

    if update_last:
        revset = b'last(all(), %d)' % update_last
        last_revs = repo.unfiltered().revs(revset)
        clear_revs.extend(last_revs)

        from mercurial import repoview

        rev_filter = {(b'experimental', b'extra-filter-revs'): revset}
        with repo.ui.configoverride(rev_filter, source=b"perf"):
            filter_id = repoview.extrafilter(repo.ui)

        filter_name = b'%s%%%s' % (repo.filtername, filter_id)
        pre_repo = repo.filtered(filter_name)
        pre_repo.tags()  # warm the cache
        old_tags_path = repo.cachevfs.join(tags._filename(pre_repo))
        new_tags_path = repo.cachevfs.join(tags._filename(repo))

    clear_revs = sorted(set(clear_revs))

    def s():
        if update_last:
            util.copyfile(old_tags_path, new_tags_path)
        if clearrevlogs:
            clearchangelog(repo)
            clearfilecache(repo.unfiltered(), 'manifest')
        if clear_disk:
            clear_disk_fn(repo)
        if clear_fnode:
            clear_fnodes_fn(repo)
        elif clear_revs:
            clear_fnodes_rev_fn(repo, clear_revs)
        repocleartagscache()

    def t():
        len(repo.tags())

    timer(t, setup=s)
    fm.end()
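

# Illustrative invocations (a sketch; the DESTRUCTIVE flags rewrite the
# on-disk caches of the benchmarked repository):
#
#   $ hg perf::tags --clear-revlogs
#   $ hg perf::tags --clear-fnode-cache-rev 'last(100)'
#   $ hg perf::tags --update-last 100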


@command(b'perf::ancestors|perfancestors', formatteropts)
def perfancestors(ui, repo, **opts):
    opts = _byteskwargs(opts)
    timer, fm = gettimer(ui, opts)
    heads = repo.changelog.headrevs()

    def d():
        for a in repo.changelog.ancestors(heads):
            pass

    timer(d)
    fm.end()


@command(b'perf::ancestorset|perfancestorset', formatteropts)
def perfancestorset(ui, repo, revset, **opts):
    opts = _byteskwargs(opts)
    timer, fm = gettimer(ui, opts)
    revs = repo.revs(revset)
    heads = repo.changelog.headrevs()

    def d():
        s = repo.changelog.ancestors(heads)
        for rev in revs:
            rev in s

    timer(d)
    fm.end()


@command(
    b'perf::delta-find',
    revlogopts + formatteropts,
    b'-c|-m|FILE REV',
)
def perf_delta_find(ui, repo, arg_1, arg_2=None, **opts):
    """benchmark the process of finding a valid delta for a revlog revision

    When a revlog receives a new revision (e.g. from a commit, or from an
    incoming bundle), it searches for a suitable delta-base to produce a delta.
    This perf command measures how much time we spend in this process. It
    operates on an already stored revision.

    See `hg help debug-delta-find` for another related command.
    """
    from mercurial import revlogutils
    import mercurial.revlogutils.deltas as deltautil

    opts = _byteskwargs(opts)
    if arg_2 is None:
        file_ = None
        rev = arg_1
    else:
        file_ = arg_1
        rev = arg_2

    repo = repo.unfiltered()

    timer, fm = gettimer(ui, opts)

    rev = int(rev)

    revlog = cmdutil.openrevlog(repo, b'perf::delta-find', file_, opts)

    deltacomputer = deltautil.deltacomputer(revlog)

    node = revlog.node(rev)
    p1r, p2r = revlog.parentrevs(rev)
    p1 = revlog.node(p1r)
    p2 = revlog.node(p2r)
    full_text = revlog.revision(rev)
    textlen = len(full_text)
    cachedelta = None
    flags = revlog.flags(rev)

    revinfo = revlogutils.revisioninfo(
        node,
        p1,
        p2,
        [full_text],  # btext
        textlen,
        cachedelta,
        flags,
    )

    # Note: we should probably purge the potential caches (like the full
    # manifest cache) between runs.
    def find_one():
        with revlog._datafp() as fh:
            deltacomputer.finddeltainfo(revinfo, fh, target_rev=rev)

    timer(find_one)
    fm.end()
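

# Illustrative invocations (a sketch; REV must already be stored):
#
#   $ hg perf::delta-find -m 4242           # manifest revision 4242
#   $ hg perf::delta-find some/file.txt 12  # filelog revision 12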


@command(b'perf::discovery|perfdiscovery', formatteropts, b'PATH')
def perfdiscovery(ui, repo, path, **opts):
    """benchmark discovery between local repo and the peer at given path"""
    repos = [repo, None]
    timer, fm = gettimer(ui, opts)

    try:
        from mercurial.utils.urlutil import get_unique_pull_path_obj

        path = get_unique_pull_path_obj(b'perfdiscovery', ui, path)
    except ImportError:
        try:
            from mercurial.utils.urlutil import get_unique_pull_path

            path = get_unique_pull_path(b'perfdiscovery', repo, ui, path)[0]
        except ImportError:
            path = ui.expandpath(path)

    def s():
        repos[1] = hg.peer(ui, opts, path)

    def d():
        setdiscovery.findcommonheads(ui, *repos)

    timer(d, setup=s)
    fm.end()
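

# Illustrative usage (a sketch; `../other-clone` is a placeholder path):
#
#   $ hg perf::discovery ../other-clone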


@command(
    b'perf::bookmarks|perfbookmarks',
    formatteropts
    + [
        (b'', b'clear-revlogs', False, b'refresh changelog and manifest'),
    ],
)
def perfbookmarks(ui, repo, **opts):
    """benchmark parsing bookmarks from disk to memory"""
    opts = _byteskwargs(opts)
    timer, fm = gettimer(ui, opts)

    clearrevlogs = opts[b'clear_revlogs']

    def s():
        if clearrevlogs:
            clearchangelog(repo)
        clearfilecache(repo, b'_bookmarks')

    def d():
        repo._bookmarks

    timer(d, setup=s)
    fm.end()


@command(
    b'perf::bundle',
    [
        (
            b'r',
            b'rev',
            [],
            b'changesets to bundle',
            b'REV',
        ),
        (
            b't',
            b'type',
            b'none',
            b'bundlespec to use (see `hg help bundlespec`)',
            b'TYPE',
        ),
    ]
    + formatteropts,
    b'REVS',
)
def perfbundle(ui, repo, *revs, **opts):
    """benchmark the creation of a bundle from a repository

    For now, this only supports "none" compression.
    """
    try:
        from mercurial import bundlecaches

        parsebundlespec = bundlecaches.parsebundlespec
    except ImportError:
        from mercurial import exchange

        parsebundlespec = exchange.parsebundlespec

    from mercurial import discovery
    from mercurial import bundle2

    opts = _byteskwargs(opts)
    timer, fm = gettimer(ui, opts)

    cl = repo.changelog
    revs = list(revs)
    revs.extend(opts.get(b'rev', ()))
    revs = scmutil.revrange(repo, revs)
    if not revs:
        raise error.Abort(b"no revision specified")
    # make it a consistent set (ie: without topological gaps)
    old_len = len(revs)
    revs = list(repo.revs(b"%ld::%ld", revs, revs))
    if old_len != len(revs):
        new_count = len(revs) - old_len
        msg = b"added %d new revisions to make it a consistent set\n"
        ui.write_err(msg % new_count)

    targets = [cl.node(r) for r in repo.revs(b"heads(::%ld)", revs)]
    bases = [cl.node(r) for r in repo.revs(b"heads(::%ld - %ld)", revs, revs)]
    outgoing = discovery.outgoing(repo, bases, targets)

    bundle_spec = opts.get(b'type')

    bundle_spec = parsebundlespec(repo, bundle_spec, strict=False)

    cgversion = bundle_spec.params.get(b"cg.version")
    if cgversion is None:
        if bundle_spec.version == b'v1':
            cgversion = b'01'
        if bundle_spec.version == b'v2':
            cgversion = b'02'
    if cgversion not in changegroup.supportedoutgoingversions(repo):
        err = b"repository does not support bundle version %s"
        raise error.Abort(err % cgversion)

    if cgversion == b'01':  # bundle1
        bversion = b'HG10' + bundle_spec.wirecompression
        bcompression = None
    elif cgversion in (b'02', b'03'):
        bversion = b'HG20'
        bcompression = bundle_spec.wirecompression
    else:
        err = b'perf::bundle: unexpected changegroup version %s'
        raise error.ProgrammingError(err % cgversion)

    if bcompression is None:
        bcompression = b'UN'

    if bcompression != b'UN':
        err = b'perf::bundle: compression currently unsupported: %s'
        raise error.ProgrammingError(err % bcompression)

    def do_bundle():
        bundle2.writenewbundle(
            ui,
            repo,
            b'perf::bundle',
            os.devnull,
            bversion,
            outgoing,
            bundle_spec.params,
        )

    timer(do_bundle)
    fm.end()
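

# Illustrative invocations (a sketch; only uncompressed bundlespecs are
# accepted, per the check above):
#
#   $ hg perf::bundle -r 'last(1000)'
#   $ hg perf::bundle -r 'all()' --type none-v2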


@command(b'perf::bundleread|perfbundleread', formatteropts, b'BUNDLE')
def perfbundleread(ui, repo, bundlepath, **opts):
    """Benchmark reading of bundle files.

    This command is meant to isolate the I/O part of bundle reading as
    much as possible.
    """
    from mercurial import (
        bundle2,
        exchange,
        streamclone,
    )

    opts = _byteskwargs(opts)

    def makebench(fn):
        def run():
            with open(bundlepath, b'rb') as fh:
                bundle = exchange.readbundle(ui, fh, bundlepath)
                fn(bundle)

        return run

    def makereadnbytes(size):
        def run():
            with open(bundlepath, b'rb') as fh:
                bundle = exchange.readbundle(ui, fh, bundlepath)
                while bundle.read(size):
                    pass

        return run

    def makestdioread(size):
        def run():
            with open(bundlepath, b'rb') as fh:
                while fh.read(size):
                    pass

        return run

    # bundle1

    def deltaiter(bundle):
        for delta in bundle.deltaiter():
            pass

    def iterchunks(bundle):
        for chunk in bundle.getchunks():
            pass

    # bundle2

    def forwardchunks(bundle):
        for chunk in bundle._forwardchunks():
            pass

    def iterparts(bundle):
        for part in bundle.iterparts():
            pass

    def iterpartsseekable(bundle):
        for part in bundle.iterparts(seekable=True):
            pass

    def seek(bundle):
        for part in bundle.iterparts(seekable=True):
            part.seek(0, os.SEEK_END)

    def makepartreadnbytes(size):
        def run():
            with open(bundlepath, b'rb') as fh:
                bundle = exchange.readbundle(ui, fh, bundlepath)
                for part in bundle.iterparts():
                    while part.read(size):
                        pass

        return run

    benches = [
        (makestdioread(8192), b'read(8k)'),
        (makestdioread(16384), b'read(16k)'),
        (makestdioread(32768), b'read(32k)'),
        (makestdioread(131072), b'read(128k)'),
    ]

    with open(bundlepath, b'rb') as fh:
        bundle = exchange.readbundle(ui, fh, bundlepath)

        if isinstance(bundle, changegroup.cg1unpacker):
            benches.extend(
                [
                    (makebench(deltaiter), b'cg1 deltaiter()'),
                    (makebench(iterchunks), b'cg1 getchunks()'),
                    (makereadnbytes(8192), b'cg1 read(8k)'),
                    (makereadnbytes(16384), b'cg1 read(16k)'),
                    (makereadnbytes(32768), b'cg1 read(32k)'),
                    (makereadnbytes(131072), b'cg1 read(128k)'),
                ]
            )
        elif isinstance(bundle, bundle2.unbundle20):
            benches.extend(
                [
                    (makebench(forwardchunks), b'bundle2 forwardchunks()'),
                    (makebench(iterparts), b'bundle2 iterparts()'),
                    (
                        makebench(iterpartsseekable),
                        b'bundle2 iterparts() seekable',
                    ),
                    (makebench(seek), b'bundle2 part seek()'),
                    (makepartreadnbytes(8192), b'bundle2 part read(8k)'),
                    (makepartreadnbytes(16384), b'bundle2 part read(16k)'),
                    (makepartreadnbytes(32768), b'bundle2 part read(32k)'),
                    (makepartreadnbytes(131072), b'bundle2 part read(128k)'),
                ]
            )
        elif isinstance(bundle, streamclone.streamcloneapplier):
            raise error.Abort(b'stream clone bundles not supported')
        else:
            raise error.Abort(b'unhandled bundle type: %s' % type(bundle))

    for fn, title in benches:
        timer, fm = gettimer(ui, opts)
        timer(fn, title=title)
        fm.end()
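

# Illustrative usage (a sketch; the bundle file name is a placeholder and
# can be produced beforehand with `hg bundle`):
#
#   $ hg bundle --all --type none-v2 all.hg
#   $ hg perf::bundleread all.hg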


@command(
    b'perf::changegroupchangelog|perfchangegroupchangelog',
    formatteropts
    + [
        (b'', b'cgversion', b'02', b'changegroup version'),
        (b'r', b'rev', b'', b'revisions to add to changegroup'),
    ],
)
def perfchangegroupchangelog(ui, repo, cgversion=b'02', rev=None, **opts):
    """Benchmark producing a changelog group for a changegroup.

    This measures the time spent processing the changelog during a
    bundle operation. This occurs during `hg bundle` and on a server
    processing a `getbundle` wire protocol request (handles clones
    and pull requests).

    By default, all revisions are added to the changegroup.
    """
    opts = _byteskwargs(opts)
    cl = repo.changelog
    nodes = [cl.lookup(r) for r in repo.revs(rev or b'all()')]
    bundler = changegroup.getbundler(cgversion, repo)

    def d():
        state, chunks = bundler._generatechangelog(cl, nodes)
        for chunk in chunks:
            pass

    timer, fm = gettimer(ui, opts)

    # Terminal printing can interfere with timing. So disable it.
    with ui.configoverride({(b'progress', b'disable'): True}):
        timer(d)

    fm.end()


@command(b'perf::dirs|perfdirs', formatteropts)
def perfdirs(ui, repo, **opts):
    opts = _byteskwargs(opts)
    timer, fm = gettimer(ui, opts)
    dirstate = repo.dirstate
    b'a' in dirstate

    def d():
        dirstate.hasdir(b'a')
        try:
            del dirstate._map._dirs
        except AttributeError:
            pass

    timer(d)
    fm.end()


@command(
    b'perf::dirstate|perfdirstate',
    [
        (
            b'',
            b'iteration',
            None,
            b'benchmark a full iteration for the dirstate',
        ),
        (
            b'',
            b'contains',
            None,
            b'benchmark a large amount of `nf in dirstate` calls',
        ),
    ]
    + formatteropts,
)
def perfdirstate(ui, repo, **opts):
    """benchmark the time of various dirstate operations

    By default, benchmark the time necessary to load a dirstate from scratch.
    The dirstate is loaded to the point where a "contains" request can be
    answered.
    """
    opts = _byteskwargs(opts)
    timer, fm = gettimer(ui, opts)
    b"a" in repo.dirstate

    if opts[b'iteration'] and opts[b'contains']:
        msg = b'only specify one of --iteration or --contains'
        raise error.Abort(msg)

    if opts[b'iteration']:
        setup = None
        dirstate = repo.dirstate

        def d():
            for f in dirstate:
                pass

    elif opts[b'contains']:
        setup = None
        dirstate = repo.dirstate
        allfiles = list(dirstate)
        # also add file paths that will be "missing" from the dirstate
        allfiles.extend([f[::-1] for f in allfiles])

        def d():
            for f in allfiles:
                f in dirstate

    else:

        def setup():
            repo.dirstate.invalidate()

        def d():
            b"a" in repo.dirstate

    timer(d, setup=setup)
    fm.end()
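

# Illustrative invocations (a sketch covering the three modes above):
#
#   $ hg perf::dirstate               # load from scratch
#   $ hg perf::dirstate --iteration   # full iteration
#   $ hg perf::dirstate --contains    # many `f in dirstate` checks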


@command(b'perf::dirstatedirs|perfdirstatedirs', formatteropts)
def perfdirstatedirs(ui, repo, **opts):
    """benchmark a 'dirstate.hasdir' call from an empty `dirs` cache"""
    opts = _byteskwargs(opts)
    timer, fm = gettimer(ui, opts)
    repo.dirstate.hasdir(b"a")

    def setup():
        try:
            del repo.dirstate._map._dirs
        except AttributeError:
            pass

    def d():
        repo.dirstate.hasdir(b"a")

    timer(d, setup=setup)
    fm.end()


@command(b'perf::dirstatefoldmap|perfdirstatefoldmap', formatteropts)
def perfdirstatefoldmap(ui, repo, **opts):
    """benchmark a `dirstate._map.filefoldmap.get()` request

    The dirstate filefoldmap cache is dropped between every request.
    """
    opts = _byteskwargs(opts)
    timer, fm = gettimer(ui, opts)
    dirstate = repo.dirstate
    dirstate._map.filefoldmap.get(b'a')

    def setup():
        del dirstate._map.filefoldmap

    def d():
        dirstate._map.filefoldmap.get(b'a')

    timer(d, setup=setup)
    fm.end()


@command(b'perf::dirfoldmap|perfdirfoldmap', formatteropts)
def perfdirfoldmap(ui, repo, **opts):
    """benchmark a `dirstate._map.dirfoldmap.get()` request

    The dirstate dirfoldmap cache is dropped between every request.
    """
    opts = _byteskwargs(opts)
    timer, fm = gettimer(ui, opts)
    dirstate = repo.dirstate
    dirstate._map.dirfoldmap.get(b'a')

    def setup():
        del dirstate._map.dirfoldmap
        try:
            del dirstate._map._dirs
        except AttributeError:
            pass

    def d():
        dirstate._map.dirfoldmap.get(b'a')

    timer(d, setup=setup)
    fm.end()


@command(b'perf::dirstatewrite|perfdirstatewrite', formatteropts)
def perfdirstatewrite(ui, repo, **opts):
    """benchmark the time it takes to write a dirstate to disk"""
    opts = _byteskwargs(opts)
    timer, fm = gettimer(ui, opts)
    ds = repo.dirstate
    b"a" in ds

    def setup():
        ds._dirty = True

    def d():
        ds.write(repo.currenttransaction())

    with repo.wlock():
        timer(d, setup=setup)
    fm.end()


def _getmergerevs(repo, opts):
    """parse command arguments to return the revs involved in a merge

    input: options dictionary with `rev`, `from` and `base`
    output: (localctx, otherctx, basectx)
    """
    if opts[b'from']:
        fromrev = scmutil.revsingle(repo, opts[b'from'])
        wctx = repo[fromrev]
    else:
        wctx = repo[None]
        # we don't want working dir files to be stat'd in the benchmark, so
        # prime that cache
        wctx.dirty()
    rctx = scmutil.revsingle(repo, opts[b'rev'], opts[b'rev'])
    if opts[b'base']:
        fromrev = scmutil.revsingle(repo, opts[b'base'])
        ancestor = repo[fromrev]
    else:
        ancestor = wctx.ancestor(rctx)
    return (wctx, rctx, ancestor)
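

# Illustrative call (a sketch; the keys mirror the options declared by the
# two merge-related commands below):
#
#   wctx, rctx, ancestor = _getmergerevs(
#       repo, {b'rev': b'.', b'from': b'', b'base': b''}
#   )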


@command(
    b'perf::mergecalculate|perfmergecalculate',
    [
        (b'r', b'rev', b'.', b'rev to merge against'),
        (b'', b'from', b'', b'rev to merge from'),
        (b'', b'base', b'', b'the revision to use as base'),
    ]
    + formatteropts,
)
def perfmergecalculate(ui, repo, **opts):
    opts = _byteskwargs(opts)
    timer, fm = gettimer(ui, opts)

    wctx, rctx, ancestor = _getmergerevs(repo, opts)

    def d():
        # acceptremote is True because we don't want prompts in the middle of
        # our benchmark
        merge.calculateupdates(
            repo,
            wctx,
            rctx,
            [ancestor],
            branchmerge=False,
            force=False,
            acceptremote=True,
            followcopies=True,
        )

    timer(d)
    fm.end()


@command(
    b'perf::mergecopies|perfmergecopies',
    [
        (b'r', b'rev', b'.', b'rev to merge against'),
        (b'', b'from', b'', b'rev to merge from'),
        (b'', b'base', b'', b'the revision to use as base'),
    ]
    + formatteropts,
)
def perfmergecopies(ui, repo, **opts):
    """measure runtime of `copies.mergecopies`"""
    opts = _byteskwargs(opts)
    timer, fm = gettimer(ui, opts)
    wctx, rctx, ancestor = _getmergerevs(repo, opts)

    def d():
        # acceptremote is True because we don't want prompts in the middle of
        # our benchmark
        copies.mergecopies(repo, wctx, rctx, ancestor)

    timer(d)
    fm.end()


@command(b'perf::pathcopies|perfpathcopies', [], b"REV REV")
def perfpathcopies(ui, repo, rev1, rev2, **opts):
    """benchmark the copy tracing logic"""
    opts = _byteskwargs(opts)
    timer, fm = gettimer(ui, opts)
    ctx1 = scmutil.revsingle(repo, rev1, rev1)
    ctx2 = scmutil.revsingle(repo, rev2, rev2)

    def d():
        copies.pathcopies(ctx1, ctx2)

    timer(d)
    fm.end()
1707
1724
1708
1725
1709 @command(
1726 @command(
1710 b'perf::phases|perfphases',
1727 b'perf::phases|perfphases',
1711 [
1728 [
1712 (b'', b'full', False, b'include file reading time too'),
1729 (b'', b'full', False, b'include file reading time too'),
1713 ]
1730 ]
1714 + formatteropts,
1731 + formatteropts,
1715 b"",
1732 b"",
1716 )
1733 )
1717 def perfphases(ui, repo, **opts):
1734 def perfphases(ui, repo, **opts):
1718 """benchmark phasesets computation"""
1735 """benchmark phasesets computation"""
1719 opts = _byteskwargs(opts)
1736 opts = _byteskwargs(opts)
1720 timer, fm = gettimer(ui, opts)
1737 timer, fm = gettimer(ui, opts)
1721 _phases = repo._phasecache
1738 _phases = repo._phasecache
1722 full = opts.get(b'full')
1739 full = opts.get(b'full')
1723 tip_rev = repo.changelog.tiprev()
1740 tip_rev = repo.changelog.tiprev()
1724
1741
1725 def d():
1742 def d():
1726 phases = _phases
1743 phases = _phases
1727 if full:
1744 if full:
1728 clearfilecache(repo, b'_phasecache')
1745 clearfilecache(repo, b'_phasecache')
1729 phases = repo._phasecache
1746 phases = repo._phasecache
1730 phases.invalidate()
1747 phases.invalidate()
1731 phases.phase(repo, tip_rev)
1748 phases.phase(repo, tip_rev)
1732
1749
1733 timer(d)
1750 timer(d)
1734 fm.end()
1751 fm.end()


@command(b'perf::phasesremote|perfphasesremote', [], b"[DEST]")
def perfphasesremote(ui, repo, dest=None, **opts):
    """benchmark time needed to analyse phases of the remote server"""
    from mercurial.node import bin
    from mercurial import (
        exchange,
        hg,
        phases,
    )

    opts = _byteskwargs(opts)
    timer, fm = gettimer(ui, opts)

    path = ui.getpath(dest, default=(b'default-push', b'default'))
    if not path:
        raise error.Abort(
            b'default repository not configured!',
            hint=b"see 'hg help config.paths'",
        )
    if util.safehasattr(path, 'main_path'):
        path = path.get_push_variant()
        dest = path.loc
    else:
        dest = path.pushloc or path.loc
    ui.statusnoi18n(b'analysing phase of %s\n' % util.hidepassword(dest))
    other = hg.peer(repo, opts, dest)

    # easier to perform discovery through the operation
    op = exchange.pushoperation(repo, other)
    exchange._pushdiscoverychangeset(op)

    remotesubset = op.fallbackheads

    with other.commandexecutor() as e:
        remotephases = e.callcommand(
            b'listkeys', {b'namespace': b'phases'}
        ).result()
    del other
    publishing = remotephases.get(b'publishing', False)
    if publishing:
        ui.statusnoi18n(b'publishing: yes\n')
    else:
        ui.statusnoi18n(b'publishing: no\n')

    has_node = getattr(repo.changelog.index, 'has_node', None)
    if has_node is None:
        has_node = repo.changelog.nodemap.__contains__
    nonpublishroots = 0
    for nhex, phase in remotephases.iteritems():
        if nhex == b'publishing':  # ignore data related to publish option
            continue
        node = bin(nhex)
        if has_node(node) and int(phase):
            nonpublishroots += 1
    ui.statusnoi18n(b'number of roots: %d\n' % len(remotephases))
    ui.statusnoi18n(b'number of known non public roots: %d\n' % nonpublishroots)

    def d():
        phases.remotephasessummary(repo, remotesubset, remotephases)

    timer(d)
    fm.end()


@command(
    b'perf::manifest|perfmanifest',
    [
        (b'm', b'manifest-rev', False, b'Look up a manifest node revision'),
        (b'', b'clear-disk', False, b'clear on-disk caches too'),
    ]
    + formatteropts,
    b'REV|NODE',
)
def perfmanifest(ui, repo, rev, manifest_rev=False, clear_disk=False, **opts):
    """benchmark the time to read a manifest from disk and return a usable
    dict-like object

    Manifest caches are cleared before retrieval."""
    opts = _byteskwargs(opts)
    timer, fm = gettimer(ui, opts)
    if not manifest_rev:
        ctx = scmutil.revsingle(repo, rev, rev)
        t = ctx.manifestnode()
    else:
        from mercurial.node import bin

        if len(rev) == 40:
            t = bin(rev)
        else:
            try:
                rev = int(rev)

                if util.safehasattr(repo.manifestlog, b'getstorage'):
                    t = repo.manifestlog.getstorage(b'').node(rev)
                else:
                    t = repo.manifestlog._revlog.lookup(rev)
            except ValueError:
                raise error.Abort(
                    b'manifest revision must be integer or full node'
                )

    def d():
        repo.manifestlog.clearcaches(clear_persisted_data=clear_disk)
        repo.manifestlog[t].read()

    timer(d)
    fm.end()
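

# Illustrative invocations (a sketch):
#
#   $ hg perf::manifest tip                # resolve REV to its manifest
#   $ hg perf::manifest --manifest-rev 0   # direct manifest revision lookup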


@command(b'perf::changeset|perfchangeset', formatteropts)
def perfchangeset(ui, repo, rev, **opts):
    opts = _byteskwargs(opts)
    timer, fm = gettimer(ui, opts)
    n = scmutil.revsingle(repo, rev).node()

    def d():
        repo.changelog.read(n)
        # repo.changelog._cache = None

    timer(d)
    fm.end()
1858
1875
1859
1876
1860 @command(b'perf::ignore|perfignore', formatteropts)
1877 @command(b'perf::ignore|perfignore', formatteropts)
1861 def perfignore(ui, repo, **opts):
1878 def perfignore(ui, repo, **opts):
1862 """benchmark operation related to computing ignore"""
1879 """benchmark operation related to computing ignore"""
1863 opts = _byteskwargs(opts)
1880 opts = _byteskwargs(opts)
1864 timer, fm = gettimer(ui, opts)
1881 timer, fm = gettimer(ui, opts)
1865 dirstate = repo.dirstate
1882 dirstate = repo.dirstate
1866
1883
1867 def setupone():
1884 def setupone():
1868 dirstate.invalidate()
1885 dirstate.invalidate()
1869 clearfilecache(dirstate, b'_ignore')
1886 clearfilecache(dirstate, b'_ignore')
1870
1887
1871 def runone():
1888 def runone():
1872 dirstate._ignore
1889 dirstate._ignore
1873
1890
1874 timer(runone, setup=setupone, title=b"load")
1891 timer(runone, setup=setupone, title=b"load")
1875 fm.end()
1892 fm.end()
1876
1893
1877
1894
1878 @command(
1895 @command(
1879 b'perf::index|perfindex',
1896 b'perf::index|perfindex',
1880 [
1897 [
1881 (b'', b'rev', [], b'revision to be looked up (default tip)'),
1898 (b'', b'rev', [], b'revision to be looked up (default tip)'),
1882 (b'', b'no-lookup', None, b'do not perform revision lookup after creation'),
1899 (b'', b'no-lookup', None, b'do not perform revision lookup after creation'),
1883 ]
1900 ]
1884 + formatteropts,
1901 + formatteropts,
1885 )
1902 )
1886 def perfindex(ui, repo, **opts):
1903 def perfindex(ui, repo, **opts):
1887 """benchmark index creation time followed by a lookup
1904 """benchmark index creation time followed by a lookup
1888
1905
1889 The default is to look `tip` up. Depending on the index implementation,
1906 The default is to look `tip` up. Depending on the index implementation,
1890 the revision looked up can matter. For example, an implementation
1907 the revision looked up can matter. For example, an implementation
1891 scanning the index will have a faster lookup time for `--rev tip` than for
1908 scanning the index will have a faster lookup time for `--rev tip` than for
1892 `--rev 0`. The number of looked up revisions and their order can also
1909 `--rev 0`. The number of looked up revisions and their order can also
1893 matter.
1910 matter.
1894 
1911 
1895 Examples of useful sets to test:
1912 Examples of useful sets to test:
1896
1913
1897 * tip
1914 * tip
1898 * 0
1915 * 0
1899 * -10:
1916 * -10:
1900 * :10
1917 * :10
1901 * -10: + :10
1918 * -10: + :10
1902 * :10: + -10:
1919 * :10: + -10:
1903 * -10000:
1920 * -10000:
1904 * -10000: + 0
1921 * -10000: + 0
1905
1922
1906 It is not currently possible to check for lookup of a missing node. For
1923 It is not currently possible to check for lookup of a missing node. For
1907 deeper lookup benchmarking, check out the `perfnodemap` command."""
1924 deeper lookup benchmarking, check out the `perfnodemap` command."""
1908 import mercurial.revlog
1925 import mercurial.revlog
1909
1926
1910 opts = _byteskwargs(opts)
1927 opts = _byteskwargs(opts)
1911 timer, fm = gettimer(ui, opts)
1928 timer, fm = gettimer(ui, opts)
1912 mercurial.revlog._prereadsize = 2 ** 24 # disable lazy parser in old hg
1929 mercurial.revlog._prereadsize = 2 ** 24 # disable lazy parser in old hg
1913 if opts[b'no_lookup']:
1930 if opts[b'no_lookup']:
1914 if opts['rev']:
1931 if opts['rev']:
1915 raise error.Abort('--no-lookup and --rev are mutually exclusive')
1932 raise error.Abort('--no-lookup and --rev are mutually exclusive')
1916 nodes = []
1933 nodes = []
1917 elif not opts[b'rev']:
1934 elif not opts[b'rev']:
1918 nodes = [repo[b"tip"].node()]
1935 nodes = [repo[b"tip"].node()]
1919 else:
1936 else:
1920 revs = scmutil.revrange(repo, opts[b'rev'])
1937 revs = scmutil.revrange(repo, opts[b'rev'])
1921 cl = repo.changelog
1938 cl = repo.changelog
1922 nodes = [cl.node(r) for r in revs]
1939 nodes = [cl.node(r) for r in revs]
1923
1940
1924 unfi = repo.unfiltered()
1941 unfi = repo.unfiltered()
1925 # find the filecache func directly
1942 # find the filecache func directly
1926 # This avoids polluting the benchmark with the filecache logic
1943 # This avoids polluting the benchmark with the filecache logic
1927 makecl = unfi.__class__.changelog.func
1944 makecl = unfi.__class__.changelog.func
1928
1945
1929 def setup():
1946 def setup():
1930 # probably not necessary, but for good measure
1947 # probably not necessary, but for good measure
1931 clearchangelog(unfi)
1948 clearchangelog(unfi)
1932
1949
1933 def d():
1950 def d():
1934 cl = makecl(unfi)
1951 cl = makecl(unfi)
1935 for n in nodes:
1952 for n in nodes:
1936 cl.rev(n)
1953 cl.rev(n)
1937
1954
1938 timer(d, setup=setup)
1955 timer(d, setup=setup)
1939 fm.end()
1956 fm.end()
1940
1957
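# Editor's invocation sketch for the revset list above (an assumed workflow;
# --rev can be repeated to combine sets):
#
#   $ hg perf::index --rev tip --rev 0
#   $ hg perf::index --rev '-10:' --rev ':10'
#   $ hg perf::index --no-lookup              # index creation time only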
1941
1958
1942 @command(
1959 @command(
1943 b'perf::nodemap|perfnodemap',
1960 b'perf::nodemap|perfnodemap',
1944 [
1961 [
1945 (b'', b'rev', [], b'revision to be looked up (default tip)'),
1962 (b'', b'rev', [], b'revision to be looked up (default tip)'),
1946 (b'', b'clear-caches', True, b'clear revlog cache between calls'),
1963 (b'', b'clear-caches', True, b'clear revlog cache between calls'),
1947 ]
1964 ]
1948 + formatteropts,
1965 + formatteropts,
1949 )
1966 )
1950 def perfnodemap(ui, repo, **opts):
1967 def perfnodemap(ui, repo, **opts):
1951 """benchmark the time necessary to look up revision from a cold nodemap
1968 """benchmark the time necessary to look up revision from a cold nodemap
1952
1969
1953 Depending on the implementation, the amount and order of revision we look
1970 Depending on the implementation, the amount and order of revision we look
1954 up can varies. Example of useful set to test:
1971 up can varies. Example of useful set to test:
1955 * tip
1972 * tip
1956 * 0
1973 * 0
1957 * -10:
1974 * -10:
1958 * :10
1975 * :10
1959 * -10: + :10
1976 * -10: + :10
1960 * :10: + -10:
1977 * :10: + -10:
1961 * -10000:
1978 * -10000:
1962 * -10000: + 0
1979 * -10000: + 0
1963
1980
1964 The command currently focuses on valid binary lookup. Benchmarking for
1981 The command currently focuses on valid binary lookup. Benchmarking for
1965 hexlookup, prefix lookup and missing lookup would also be valuable.
1982 hexlookup, prefix lookup and missing lookup would also be valuable.
1966 """
1983 """
1967 import mercurial.revlog
1984 import mercurial.revlog
1968
1985
1969 opts = _byteskwargs(opts)
1986 opts = _byteskwargs(opts)
1970 timer, fm = gettimer(ui, opts)
1987 timer, fm = gettimer(ui, opts)
1971 mercurial.revlog._prereadsize = 2 ** 24 # disable lazy parser in old hg
1988 mercurial.revlog._prereadsize = 2 ** 24 # disable lazy parser in old hg
1972
1989
1973 unfi = repo.unfiltered()
1990 unfi = repo.unfiltered()
1974 clearcaches = opts[b'clear_caches']
1991 clearcaches = opts[b'clear_caches']
1975 # find the filecache func directly
1992 # find the filecache func directly
1976 # This avoids polluting the benchmark with the filecache logic
1993 # This avoids polluting the benchmark with the filecache logic
1977 makecl = unfi.__class__.changelog.func
1994 makecl = unfi.__class__.changelog.func
1978 if not opts[b'rev']:
1995 if not opts[b'rev']:
1979 raise error.Abort(b'use --rev to specify revisions to look up')
1996 raise error.Abort(b'use --rev to specify revisions to look up')
1980 revs = scmutil.revrange(repo, opts[b'rev'])
1997 revs = scmutil.revrange(repo, opts[b'rev'])
1981 cl = repo.changelog
1998 cl = repo.changelog
1982 nodes = [cl.node(r) for r in revs]
1999 nodes = [cl.node(r) for r in revs]
1983
2000
2001 # use a list to pass a reference to a nodemap from one closure to the next
2018 # use a list to pass a reference to a nodemap from one closure to the next
1985 nodeget = [None]
2002 nodeget = [None]
1986
2003
1987 def setnodeget():
2004 def setnodeget():
1988 # probably not necessary, but for good measure
2005 # probably not necessary, but for good measure
1989 clearchangelog(unfi)
2006 clearchangelog(unfi)
1990 cl = makecl(unfi)
2007 cl = makecl(unfi)
1991 if util.safehasattr(cl.index, 'get_rev'):
2008 if util.safehasattr(cl.index, 'get_rev'):
1992 nodeget[0] = cl.index.get_rev
2009 nodeget[0] = cl.index.get_rev
1993 else:
2010 else:
1994 nodeget[0] = cl.nodemap.get
2011 nodeget[0] = cl.nodemap.get
1995
2012
1996 def d():
2013 def d():
1997 get = nodeget[0]
2014 get = nodeget[0]
1998 for n in nodes:
2015 for n in nodes:
1999 get(n)
2016 get(n)
2000
2017
2001 setup = None
2018 setup = None
2002 if clearcaches:
2019 if clearcaches:
2003
2020
2004 def setup():
2021 def setup():
2005 setnodeget()
2022 setnodeget()
2006
2023
2007 else:
2024 else:
2008 setnodeget()
2025 setnodeget()
2009 d() # prewarm the data structure
2026 d() # prewarm the data structure
2010 timer(d, setup=setup)
2027 timer(d, setup=setup)
2011 fm.end()
2028 fm.end()
2012
2029
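# Editor's sketch (not part of the perf extension): the one-element
# `nodeget` list above is the "box" trick, letting a nested closure rebind
# a value without `nonlocal`, which keeps very old Pythons supported.
def _nodeget_box_sketch():
    box = [None]

    def fill():
        # rebinding `box` itself would need nonlocal; mutating it does not
        box[0] = 42

    fill()
    assert box[0] == 42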
2013
2030
2014 @command(b'perf::startup|perfstartup', formatteropts)
2031 @command(b'perf::startup|perfstartup', formatteropts)
2015 def perfstartup(ui, repo, **opts):
2032 def perfstartup(ui, repo, **opts):
2016 opts = _byteskwargs(opts)
2033 opts = _byteskwargs(opts)
2017 timer, fm = gettimer(ui, opts)
2034 timer, fm = gettimer(ui, opts)
2018
2035
2019 def d():
2036 def d():
2020 if os.name != 'nt':
2037 if os.name != 'nt':
2021 os.system(
2038 os.system(
2022 b"HGRCPATH= %s version -q > /dev/null" % fsencode(sys.argv[0])
2039 b"HGRCPATH= %s version -q > /dev/null" % fsencode(sys.argv[0])
2023 )
2040 )
2024 else:
2041 else:
2025 os.environ['HGRCPATH'] = r' '
2042 os.environ['HGRCPATH'] = r' '
2026 os.system("%s version -q > NUL" % sys.argv[0])
2043 os.system("%s version -q > NUL" % sys.argv[0])
2027
2044
2028 timer(d)
2045 timer(d)
2029 fm.end()
2046 fm.end()
2030
2047
2031
2048
2049 def _clear_store_audit_cache(repo):
2050 vfs = getsvfs(repo)
2051 # unwrap the fncache proxy
2052 if not hasattr(vfs, "audit"):
2053 vfs = getattr(vfs, "vfs", vfs)
2054 auditor = vfs.audit
2055 if hasattr(auditor, "clear_audit_cache"):
2056 auditor.clear_audit_cache()
2057 elif hasattr(auditor, "audited"):
2058 auditor.audited.clear()
2059 auditor.auditeddir.clear()
2060
2061
2032 def _find_stream_generator(version):
2062 def _find_stream_generator(version):
2033 """find the proper generator function for this stream version"""
2063 """find the proper generator function for this stream version"""
2034 import mercurial.streamclone
2064 import mercurial.streamclone
2035
2065
2036 available = {}
2066 available = {}
2037
2067
2038 # try to fetch a v1 generator
2068 # try to fetch a v1 generator
2039 generatev1 = getattr(mercurial.streamclone, "generatev1", None)
2069 generatev1 = getattr(mercurial.streamclone, "generatev1", None)
2040 if generatev1 is not None:
2070 if generatev1 is not None:
2041
2071
2042 def generate(repo):
2072 def generate(repo):
2043 entries, bytes, data = generatev2(repo, None, None, True)
2073 entries, bytes, data = generatev1(repo, None, None, True)
2044 return data
2074 return data
2045
2075
2046 available[b'v1'] = generate
2076 available[b'v1'] = generate
2047 # try to fetch a v2 generator
2077 # try to fetch a v2 generator
2048 generatev2 = getattr(mercurial.streamclone, "generatev2", None)
2078 generatev2 = getattr(mercurial.streamclone, "generatev2", None)
2049 if generatev2 is not None:
2079 if generatev2 is not None:
2050
2080
2051 def generate(repo):
2081 def generate(repo):
2052 entries, bytes, data = generatev2(repo, None, None, True)
2082 entries, bytes, data = generatev2(repo, None, None, True)
2053 return data
2083 return data
2054
2084
2055 available[b'v2'] = generate
2085 available[b'v2'] = generate
2056 # try to fetch a v3 generator
2086 # try to fetch a v3 generator
2057 generatev3 = getattr(mercurial.streamclone, "generatev3", None)
2087 generatev3 = getattr(mercurial.streamclone, "generatev3", None)
2058 if generatev3 is not None:
2088 if generatev3 is not None:
2059
2089
2060 def generate(repo):
2090 def generate(repo):
2061 entries, bytes, data = generatev3(repo, None, None, True)
2091 return generatev3(repo, None, None, True)
2062 return data
2063
2092
2064 available[b'v3-exp'] = generate
2093 available[b'v3-exp'] = generate
2065
2094
2066 # resolve the request
2095 # resolve the request
2067 if version == b"latest":
2096 if version == b"latest":
2068 # latest is the highest non-experimental version
2097 # latest is the highest non-experimental version
2069 latest_key = max(v for v in available if b'-exp' not in v)
2098 latest_key = max(v for v in available if b'-exp' not in v)
2070 return available[latest_key]
2099 return available[latest_key]
2071 elif version in available:
2100 elif version in available:
2072 return available[version]
2101 return available[version]
2073 else:
2102 else:
2074 msg = b"unknown or unavailable version: %s"
2103 msg = b"unknown or unavailable version: %s"
2075 msg %= version
2104 msg %= version
2076 hint = b"available versions: %s"
2105 hint = b"available versions: %s"
2077 hint %= b', '.join(sorted(available))
2106 hint %= b', '.join(sorted(available))
2078 raise error.Abort(msg, hint=hint)
2107 raise error.Abort(msg, hint=hint)
2079
2108
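# Editor's sketch of the "latest" resolution above, with a made-up
# `available` mapping (the real keys depend on the Mercurial version):
def _latest_version_sketch():
    available = {b'v1': object(), b'v2': object(), b'v3-exp': object()}
    # the highest key without the experimental marker wins
    latest_key = max(v for v in available if b'-exp' not in v)
    assert latest_key == b'v2'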
2080
2109
2081 @command(
2110 @command(
2082 b'perf::stream-locked-section',
2111 b'perf::stream-locked-section',
2083 [
2112 [
2084 (
2113 (
2085 b'',
2114 b'',
2086 b'stream-version',
2115 b'stream-version',
2087 b'latest',
2116 b'latest',
2088 b'stream version to use ("v1", "v2", "v3" or "latest", (the default))',
2117 b'stream version to use ("v1", "v2", "v3-exp" '
2118 b'or "latest", (the default))',
2089 ),
2119 ),
2090 ]
2120 ]
2091 + formatteropts,
2121 + formatteropts,
2092 )
2122 )
2093 def perf_stream_clone_scan(ui, repo, stream_version, **opts):
2123 def perf_stream_clone_scan(ui, repo, stream_version, **opts):
2094 """benchmark the initial, repo-locked, section of a stream-clone"""
2124 """benchmark the initial, repo-locked, section of a stream-clone"""
2095
2125
2096 opts = _byteskwargs(opts)
2126 opts = _byteskwargs(opts)
2097 timer, fm = gettimer(ui, opts)
2127 timer, fm = gettimer(ui, opts)
2098
2128
2099 # deletion of the generator may trigger some cleanup that we do not want to
2129 # deletion of the generator may trigger some cleanup that we do not want to
2100 # measure
2130 # measure
2101 result_holder = [None]
2131 result_holder = [None]
2102
2132
2103 def setupone():
2133 def setupone():
2104 result_holder[0] = None
2134 result_holder[0] = None
2135 # This is important for the full generation, and even if it does not
2136 # currently matter here, it seems safer to also clear the cache.
2137 _clear_store_audit_cache(repo)
2105
2138
2106 generate = _find_stream_generator(stream_version)
2139 generate = _find_stream_generator(stream_version)
2107
2140
2108 def runone():
2141 def runone():
2109 # the lock is held for the duration of the initialisation
2142 # the lock is held for the duration of the initialisation
2110 result_holder[0] = generate(repo)
2143 result_holder[0] = generate(repo)
2111
2144
2112 timer(runone, setup=setupone, title=b"load")
2145 timer(runone, setup=setupone, title=b"load")
2113 fm.end()
2146 fm.end()
2114
2147
2115
2148
2116 @command(
2149 @command(
2117 b'perf::stream-generate',
2150 b'perf::stream-generate',
2118 [
2151 [
2119 (
2152 (
2120 b'',
2153 b'',
2121 b'stream-version',
2154 b'stream-version',
2122 b'latest',
2155 b'latest',
2123 b'stream version to us ("v1", "v2" or "latest", (the default))',
2156 b'stream version to use ("v1", "v2", "v3-exp" '
2157 b'or "latest" (the default))',
2124 ),
2158 ),
2125 ]
2159 ]
2126 + formatteropts,
2160 + formatteropts,
2127 )
2161 )
2128 def perf_stream_clone_generate(ui, repo, stream_version, **opts):
2162 def perf_stream_clone_generate(ui, repo, stream_version, **opts):
2129 """benchmark the full generation of a stream clone"""
2163 """benchmark the full generation of a stream clone"""
2130
2164
2131 opts = _byteskwargs(opts)
2165 opts = _byteskwargs(opts)
2132 timer, fm = gettimer(ui, opts)
2166 timer, fm = gettimer(ui, opts)
2133
2167
2134 # deletion of the generator may trigger some cleanup that we do not want to
2168 # deletion of the generator may trigger some cleanup that we do not want to
2135 # measure
2169 # measure
2136
2170
2137 generate = _find_stream_generator(stream_version)
2171 generate = _find_stream_generator(stream_version)
2138
2172
2173 def setup():
2174 _clear_store_audit_cache(repo)
2175
2139 def runone():
2176 def runone():
2140 # the lock is held for the duration of the initialisation
2177 # the lock is held for the duration of the initialisation
2141 for chunk in generate(repo):
2178 for chunk in generate(repo):
2142 pass
2179 pass
2143
2180
2144 timer(runone, title=b"generate")
2181 timer(runone, setup=setup, title=b"generate")
2145 fm.end()
2182 fm.end()
2146
2183
2147
2184
2148 @command(
2185 @command(
2149 b'perf::stream-consume',
2186 b'perf::stream-consume',
2150 formatteropts,
2187 formatteropts,
2151 )
2188 )
2152 def perf_stream_clone_consume(ui, repo, filename, **opts):
2189 def perf_stream_clone_consume(ui, repo, filename, **opts):
2153 """benchmark the full application of a stream clone
2190 """benchmark the full application of a stream clone
2154
2191
2155 This includes the creation of the repository
2192 This includes the creation of the repository
2156 """
2193 """
2157 # try except to appease check code
2194 # try except to appease check code
2158 msg = b"mercurial too old, missing necessary module: %s"
2195 msg = b"mercurial too old, missing necessary module: %s"
2159 try:
2196 try:
2160 from mercurial import bundle2
2197 from mercurial import bundle2
2161 except ImportError as exc:
2198 except ImportError as exc:
2162 msg %= _bytestr(exc)
2199 msg %= _bytestr(exc)
2163 raise error.Abort(msg)
2200 raise error.Abort(msg)
2164 try:
2201 try:
2165 from mercurial import exchange
2202 from mercurial import exchange
2166 except ImportError as exc:
2203 except ImportError as exc:
2167 msg %= _bytestr(exc)
2204 msg %= _bytestr(exc)
2168 raise error.Abort(msg)
2205 raise error.Abort(msg)
2169 try:
2206 try:
2170 from mercurial import hg
2207 from mercurial import hg
2171 except ImportError as exc:
2208 except ImportError as exc:
2172 msg %= _bytestr(exc)
2209 msg %= _bytestr(exc)
2173 raise error.Abort(msg)
2210 raise error.Abort(msg)
2174 try:
2211 try:
2175 from mercurial import localrepo
2212 from mercurial import localrepo
2176 except ImportError as exc:
2213 except ImportError as exc:
2177 msg %= _bytestr(exc)
2214 msg %= _bytestr(exc)
2178 raise error.Abort(msg)
2215 raise error.Abort(msg)
2179
2216
2180 opts = _byteskwargs(opts)
2217 opts = _byteskwargs(opts)
2181 timer, fm = gettimer(ui, opts)
2218 timer, fm = gettimer(ui, opts)
2182
2219
2183 # deletion of the generator may trigger some cleanup that we do not want to
2220 # deletion of the generator may trigger some cleanup that we do not want to
2184 # measure
2221 # measure
2185 if not (os.path.isfile(filename) and os.access(filename, os.R_OK)):
2222 if not (os.path.isfile(filename) and os.access(filename, os.R_OK)):
2186 raise error.Abort("not a readable file: %s" % filename)
2223 raise error.Abort("not a readable file: %s" % filename)
2187
2224
2188 run_variables = [None, None]
2225 run_variables = [None, None]
2189
2226
2227 # we create the new repository next to the other one for two reasons:
2228 # - this way we use the same file system, which is relevant for the benchmark
2229 # - if /tmp/ is small, the operation could overfill it.
2230 source_repo_dir = os.path.dirname(repo.root)
2231
2190 @contextlib.contextmanager
2232 @contextlib.contextmanager
2191 def context():
2233 def context():
2192 with open(filename, mode='rb') as bundle:
2234 with open(filename, mode='rb') as bundle:
2193 with tempfile.TemporaryDirectory() as tmp_dir:
2235 with tempfile.TemporaryDirectory(
2236 prefix=b'hg-perf-stream-consume-',
2237 dir=source_repo_dir,
2238 ) as tmp_dir:
2194 tmp_dir = fsencode(tmp_dir)
2239 tmp_dir = fsencode(tmp_dir)
2195 run_variables[0] = bundle
2240 run_variables[0] = bundle
2196 run_variables[1] = tmp_dir
2241 run_variables[1] = tmp_dir
2197 yield
2242 yield
2198 run_variables[0] = None
2243 run_variables[0] = None
2199 run_variables[1] = None
2244 run_variables[1] = None
2200
2245
2201 def runone():
2246 def runone():
2202 bundle = run_variables[0]
2247 bundle = run_variables[0]
2203 tmp_dir = run_variables[1]
2248 tmp_dir = run_variables[1]
2249
2250 # we actually want to copy all the config to ensure the repo config is
2251 # taken into account during the benchmark
2252 new_ui = repo.ui.__class__(repo.ui)
2204 # only pass ui when no srcrepo
2253 # only pass ui when no srcrepo
2205 localrepo.createrepository(
2254 localrepo.createrepository(
2206 repo.ui, tmp_dir, requirements=repo.requirements
2255 new_ui, tmp_dir, requirements=repo.requirements
2207 )
2256 )
2208 target = hg.repository(repo.ui, tmp_dir)
2257 target = hg.repository(new_ui, tmp_dir)
2209 gen = exchange.readbundle(target.ui, bundle, bundle.name)
2258 gen = exchange.readbundle(target.ui, bundle, bundle.name)
2210 # stream v1
2259 # stream v1
2211 if util.safehasattr(gen, 'apply'):
2260 if util.safehasattr(gen, 'apply'):
2212 gen.apply(target)
2261 gen.apply(target)
2213 else:
2262 else:
2214 with target.transaction(b"perf::stream-consume") as tr:
2263 with target.transaction(b"perf::stream-consume") as tr:
2215 bundle2.applybundle(
2264 bundle2.applybundle(
2216 target,
2265 target,
2217 gen,
2266 gen,
2218 tr,
2267 tr,
2219 source=b'unbundle',
2268 source=b'unbundle',
2220 url=filename,
2269 url=filename,
2221 )
2270 )
2222
2271
2223 timer(runone, context=context, title=b"consume")
2272 timer(runone, context=context, title=b"consume")
2224 fm.end()
2273 fm.end()
2225
2274
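# Editor's usage sketch (an assumed workflow; `debugcreatestreamclonebundle`
# exists in modern Mercurial, but check your version):
#
#   $ hg -R source-repo debugcreatestreamclonebundle stream.hg
#   $ hg -R source-repo perf::stream-consume stream.hg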
2226
2275
2227 @command(b'perf::parents|perfparents', formatteropts)
2276 @command(b'perf::parents|perfparents', formatteropts)
2228 def perfparents(ui, repo, **opts):
2277 def perfparents(ui, repo, **opts):
2229 """benchmark the time necessary to fetch one changeset's parents.
2278 """benchmark the time necessary to fetch one changeset's parents.
2230
2279
2231 The fetch is done using the `node identifier`, traversing all object layers
2280 The fetch is done using the `node identifier`, traversing all object layers
2232 from the repository object. The first N revisions will be used for this
2281 from the repository object. The first N revisions will be used for this
2233 benchmark. N is controlled by the ``perf.parentscount`` config option
2282 benchmark. N is controlled by the ``perf.parentscount`` config option
2234 (default: 1000).
2283 (default: 1000).
2235 """
2284 """
2236 opts = _byteskwargs(opts)
2285 opts = _byteskwargs(opts)
2237 timer, fm = gettimer(ui, opts)
2286 timer, fm = gettimer(ui, opts)
2238 # control the number of commits perfparents iterates over
2287 # control the number of commits perfparents iterates over
2239 # experimental config: perf.parentscount
2288 # experimental config: perf.parentscount
2240 count = getint(ui, b"perf", b"parentscount", 1000)
2289 count = getint(ui, b"perf", b"parentscount", 1000)
2241 if len(repo.changelog) < count:
2290 if len(repo.changelog) < count:
2242 raise error.Abort(b"repo needs %d commits for this test" % count)
2291 raise error.Abort(b"repo needs %d commits for this test" % count)
2243 repo = repo.unfiltered()
2292 repo = repo.unfiltered()
2244 nl = [repo.changelog.node(i) for i in _xrange(count)]
2293 nl = [repo.changelog.node(i) for i in _xrange(count)]
2245
2294
2246 def d():
2295 def d():
2247 for n in nl:
2296 for n in nl:
2248 repo.changelog.parents(n)
2297 repo.changelog.parents(n)
2249
2298
2250 timer(d)
2299 timer(d)
2251 fm.end()
2300 fm.end()
2252
2301
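# Editor's configuration sketch: the iteration count above comes from the
# experimental `perf.parentscount` option, e.g. in an hgrc:
#
#   [perf]
#   parentscount = 500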
2253
2302
2254 @command(b'perf::ctxfiles|perfctxfiles', formatteropts)
2303 @command(b'perf::ctxfiles|perfctxfiles', formatteropts)
2255 def perfctxfiles(ui, repo, x, **opts):
2304 def perfctxfiles(ui, repo, x, **opts):
2256 opts = _byteskwargs(opts)
2305 opts = _byteskwargs(opts)
2257 x = int(x)
2306 x = int(x)
2258 timer, fm = gettimer(ui, opts)
2307 timer, fm = gettimer(ui, opts)
2259
2308
2260 def d():
2309 def d():
2261 len(repo[x].files())
2310 len(repo[x].files())
2262
2311
2263 timer(d)
2312 timer(d)
2264 fm.end()
2313 fm.end()
2265
2314
2266
2315
2267 @command(b'perf::rawfiles|perfrawfiles', formatteropts)
2316 @command(b'perf::rawfiles|perfrawfiles', formatteropts)
2268 def perfrawfiles(ui, repo, x, **opts):
2317 def perfrawfiles(ui, repo, x, **opts):
2269 opts = _byteskwargs(opts)
2318 opts = _byteskwargs(opts)
2270 x = int(x)
2319 x = int(x)
2271 timer, fm = gettimer(ui, opts)
2320 timer, fm = gettimer(ui, opts)
2272 cl = repo.changelog
2321 cl = repo.changelog
2273
2322
2274 def d():
2323 def d():
2275 len(cl.read(x)[3])
2324 len(cl.read(x)[3])
2276
2325
2277 timer(d)
2326 timer(d)
2278 fm.end()
2327 fm.end()
2279
2328
2280
2329
2281 @command(b'perf::lookup|perflookup', formatteropts)
2330 @command(b'perf::lookup|perflookup', formatteropts)
2282 def perflookup(ui, repo, rev, **opts):
2331 def perflookup(ui, repo, rev, **opts):
2283 opts = _byteskwargs(opts)
2332 opts = _byteskwargs(opts)
2284 timer, fm = gettimer(ui, opts)
2333 timer, fm = gettimer(ui, opts)
2285 timer(lambda: len(repo.lookup(rev)))
2334 timer(lambda: len(repo.lookup(rev)))
2286 fm.end()
2335 fm.end()
2287
2336
2288
2337
2289 @command(
2338 @command(
2290 b'perf::linelogedits|perflinelogedits',
2339 b'perf::linelogedits|perflinelogedits',
2291 [
2340 [
2292 (b'n', b'edits', 10000, b'number of edits'),
2341 (b'n', b'edits', 10000, b'number of edits'),
2293 (b'', b'max-hunk-lines', 10, b'max lines in a hunk'),
2342 (b'', b'max-hunk-lines', 10, b'max lines in a hunk'),
2294 ],
2343 ],
2295 norepo=True,
2344 norepo=True,
2296 )
2345 )
2297 def perflinelogedits(ui, **opts):
2346 def perflinelogedits(ui, **opts):
2298 from mercurial import linelog
2347 from mercurial import linelog
2299
2348
2300 opts = _byteskwargs(opts)
2349 opts = _byteskwargs(opts)
2301
2350
2302 edits = opts[b'edits']
2351 edits = opts[b'edits']
2303 maxhunklines = opts[b'max_hunk_lines']
2352 maxhunklines = opts[b'max_hunk_lines']
2304
2353
2305 maxb1 = 100000
2354 maxb1 = 100000
2306 random.seed(0)
2355 random.seed(0)
2307 randint = random.randint
2356 randint = random.randint
2308 currentlines = 0
2357 currentlines = 0
2309 arglist = []
2358 arglist = []
2310 for rev in _xrange(edits):
2359 for rev in _xrange(edits):
2311 a1 = randint(0, currentlines)
2360 a1 = randint(0, currentlines)
2312 a2 = randint(a1, min(currentlines, a1 + maxhunklines))
2361 a2 = randint(a1, min(currentlines, a1 + maxhunklines))
2313 b1 = randint(0, maxb1)
2362 b1 = randint(0, maxb1)
2314 b2 = randint(b1, b1 + maxhunklines)
2363 b2 = randint(b1, b1 + maxhunklines)
2315 currentlines += (b2 - b1) - (a2 - a1)
2364 currentlines += (b2 - b1) - (a2 - a1)
2316 arglist.append((rev, a1, a2, b1, b2))
2365 arglist.append((rev, a1, a2, b1, b2))
2317
2366
2318 def d():
2367 def d():
2319 ll = linelog.linelog()
2368 ll = linelog.linelog()
2320 for args in arglist:
2369 for args in arglist:
2321 ll.replacelines(*args)
2370 ll.replacelines(*args)
2322
2371
2323 timer, fm = gettimer(ui, opts)
2372 timer, fm = gettimer(ui, opts)
2324 timer(d)
2373 timer(d)
2325 fm.end()
2374 fm.end()
2326
2375
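# Editor's sketch of the bookkeeping above: replacing source lines [a1, a2)
# with new lines [b1, b2) changes the total by (b2 - b1) - (a2 - a1).
def _linecount_delta_sketch():
    currentlines = 10
    a1, a2, b1, b2 = 2, 4, 0, 5  # drop 2 lines, insert 5
    currentlines += (b2 - b1) - (a2 - a1)
    assert currentlines == 13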
2327
2376
2328 @command(b'perf::revrange|perfrevrange', formatteropts)
2377 @command(b'perf::revrange|perfrevrange', formatteropts)
2329 def perfrevrange(ui, repo, *specs, **opts):
2378 def perfrevrange(ui, repo, *specs, **opts):
2330 opts = _byteskwargs(opts)
2379 opts = _byteskwargs(opts)
2331 timer, fm = gettimer(ui, opts)
2380 timer, fm = gettimer(ui, opts)
2332 revrange = scmutil.revrange
2381 revrange = scmutil.revrange
2333 timer(lambda: len(revrange(repo, specs)))
2382 timer(lambda: len(revrange(repo, specs)))
2334 fm.end()
2383 fm.end()
2335
2384
2336
2385
2337 @command(b'perf::nodelookup|perfnodelookup', formatteropts)
2386 @command(b'perf::nodelookup|perfnodelookup', formatteropts)
2338 def perfnodelookup(ui, repo, rev, **opts):
2387 def perfnodelookup(ui, repo, rev, **opts):
2339 opts = _byteskwargs(opts)
2388 opts = _byteskwargs(opts)
2340 timer, fm = gettimer(ui, opts)
2389 timer, fm = gettimer(ui, opts)
2341 import mercurial.revlog
2390 import mercurial.revlog
2342
2391
2343 mercurial.revlog._prereadsize = 2 ** 24 # disable lazy parser in old hg
2392 mercurial.revlog._prereadsize = 2 ** 24 # disable lazy parser in old hg
2344 n = scmutil.revsingle(repo, rev).node()
2393 n = scmutil.revsingle(repo, rev).node()
2345
2394
2346 try:
2395 try:
2347 cl = revlog(getsvfs(repo), radix=b"00changelog")
2396 cl = revlog(getsvfs(repo), radix=b"00changelog")
2348 except TypeError:
2397 except TypeError:
2349 cl = revlog(getsvfs(repo), indexfile=b"00changelog.i")
2398 cl = revlog(getsvfs(repo), indexfile=b"00changelog.i")
2350
2399
2351 def d():
2400 def d():
2352 cl.rev(n)
2401 cl.rev(n)
2353 clearcaches(cl)
2402 clearcaches(cl)
2354
2403
2355 timer(d)
2404 timer(d)
2356 fm.end()
2405 fm.end()
2357
2406
2358
2407
2359 @command(
2408 @command(
2360 b'perf::log|perflog',
2409 b'perf::log|perflog',
2361 [(b'', b'rename', False, b'ask log to follow renames')] + formatteropts,
2410 [(b'', b'rename', False, b'ask log to follow renames')] + formatteropts,
2362 )
2411 )
2363 def perflog(ui, repo, rev=None, **opts):
2412 def perflog(ui, repo, rev=None, **opts):
2364 opts = _byteskwargs(opts)
2413 opts = _byteskwargs(opts)
2365 if rev is None:
2414 if rev is None:
2366 rev = []
2415 rev = []
2367 timer, fm = gettimer(ui, opts)
2416 timer, fm = gettimer(ui, opts)
2368 ui.pushbuffer()
2417 ui.pushbuffer()
2369 timer(
2418 timer(
2370 lambda: commands.log(
2419 lambda: commands.log(
2371 ui, repo, rev=rev, date=b'', user=b'', copies=opts.get(b'rename')
2420 ui, repo, rev=rev, date=b'', user=b'', copies=opts.get(b'rename')
2372 )
2421 )
2373 )
2422 )
2374 ui.popbuffer()
2423 ui.popbuffer()
2375 fm.end()
2424 fm.end()
2376
2425
2377
2426
2378 @command(b'perf::moonwalk|perfmoonwalk', formatteropts)
2427 @command(b'perf::moonwalk|perfmoonwalk', formatteropts)
2379 def perfmoonwalk(ui, repo, **opts):
2428 def perfmoonwalk(ui, repo, **opts):
2380 """benchmark walking the changelog backwards
2429 """benchmark walking the changelog backwards
2381
2430
2382 This also loads the changelog data for each revision in the changelog.
2431 This also loads the changelog data for each revision in the changelog.
2383 """
2432 """
2384 opts = _byteskwargs(opts)
2433 opts = _byteskwargs(opts)
2385 timer, fm = gettimer(ui, opts)
2434 timer, fm = gettimer(ui, opts)
2386
2435
2387 def moonwalk():
2436 def moonwalk():
2388 for i in repo.changelog.revs(start=(len(repo) - 1), stop=-1):
2437 for i in repo.changelog.revs(start=(len(repo) - 1), stop=-1):
2389 ctx = repo[i]
2438 ctx = repo[i]
2390 ctx.branch() # read changelog data (in addition to the index)
2439 ctx.branch() # read changelog data (in addition to the index)
2391
2440
2392 timer(moonwalk)
2441 timer(moonwalk)
2393 fm.end()
2442 fm.end()
2394
2443
2395
2444
2396 @command(
2445 @command(
2397 b'perf::templating|perftemplating',
2446 b'perf::templating|perftemplating',
2398 [
2447 [
2399 (b'r', b'rev', [], b'revisions to run the template on'),
2448 (b'r', b'rev', [], b'revisions to run the template on'),
2400 ]
2449 ]
2401 + formatteropts,
2450 + formatteropts,
2402 )
2451 )
2403 def perftemplating(ui, repo, testedtemplate=None, **opts):
2452 def perftemplating(ui, repo, testedtemplate=None, **opts):
2404 """test the rendering time of a given template"""
2453 """test the rendering time of a given template"""
2405 if makelogtemplater is None:
2454 if makelogtemplater is None:
2406 raise error.Abort(
2455 raise error.Abort(
2407 b"perftemplating not available with this Mercurial",
2456 b"perftemplating not available with this Mercurial",
2408 hint=b"use 4.3 or later",
2457 hint=b"use 4.3 or later",
2409 )
2458 )
2410
2459
2411 opts = _byteskwargs(opts)
2460 opts = _byteskwargs(opts)
2412
2461
2413 nullui = ui.copy()
2462 nullui = ui.copy()
2414 nullui.fout = open(os.devnull, 'wb')
2463 nullui.fout = open(os.devnull, 'wb')
2415 nullui.disablepager()
2464 nullui.disablepager()
2416 revs = opts.get(b'rev')
2465 revs = opts.get(b'rev')
2417 if not revs:
2466 if not revs:
2418 revs = [b'all()']
2467 revs = [b'all()']
2419 revs = list(scmutil.revrange(repo, revs))
2468 revs = list(scmutil.revrange(repo, revs))
2420
2469
2421 defaulttemplate = (
2470 defaulttemplate = (
2422 b'{date|shortdate} [{rev}:{node|short}]'
2471 b'{date|shortdate} [{rev}:{node|short}]'
2423 b' {author|person}: {desc|firstline}\n'
2472 b' {author|person}: {desc|firstline}\n'
2424 )
2473 )
2425 if testedtemplate is None:
2474 if testedtemplate is None:
2426 testedtemplate = defaulttemplate
2475 testedtemplate = defaulttemplate
2427 displayer = makelogtemplater(nullui, repo, testedtemplate)
2476 displayer = makelogtemplater(nullui, repo, testedtemplate)
2428
2477
2429 def format():
2478 def format():
2430 for r in revs:
2479 for r in revs:
2431 ctx = repo[r]
2480 ctx = repo[r]
2432 displayer.show(ctx)
2481 displayer.show(ctx)
2433 displayer.flush(ctx)
2482 displayer.flush(ctx)
2434
2483
2435 timer, fm = gettimer(ui, opts)
2484 timer, fm = gettimer(ui, opts)
2436 timer(format)
2485 timer(format)
2437 fm.end()
2486 fm.end()
2438
2487
2439
2488
2440 def _displaystats(ui, opts, entries, data):
2489 def _displaystats(ui, opts, entries, data):
2441 # use a second formatter because the data are quite different, not sure
2490 # use a second formatter because the data are quite different, not sure
2442 # how it flies with the templater.
2491 # how it flies with the templater.
2443 fm = ui.formatter(b'perf-stats', opts)
2492 fm = ui.formatter(b'perf-stats', opts)
2444 for key, title in entries:
2493 for key, title in entries:
2445 values = data[key]
2494 values = data[key]
2446 nbvalues = len(values)
2495 nbvalues = len(values)
2447 values.sort()
2496 values.sort()
2448 stats = {
2497 stats = {
2449 'key': key,
2498 'key': key,
2450 'title': title,
2499 'title': title,
2451 'nbitems': len(values),
2500 'nbitems': len(values),
2452 'min': values[0][0],
2501 'min': values[0][0],
2453 '10%': values[(nbvalues * 10) // 100][0],
2502 '10%': values[(nbvalues * 10) // 100][0],
2454 '25%': values[(nbvalues * 25) // 100][0],
2503 '25%': values[(nbvalues * 25) // 100][0],
2455 '50%': values[(nbvalues * 50) // 100][0],
2504 '50%': values[(nbvalues * 50) // 100][0],
2456 '75%': values[(nbvalues * 75) // 100][0],
2505 '75%': values[(nbvalues * 75) // 100][0],
2457 '80%': values[(nbvalues * 80) // 100][0],
2506 '80%': values[(nbvalues * 80) // 100][0],
2458 '85%': values[(nbvalues * 85) // 100][0],
2507 '85%': values[(nbvalues * 85) // 100][0],
2459 '90%': values[(nbvalues * 90) // 100][0],
2508 '90%': values[(nbvalues * 90) // 100][0],
2460 '95%': values[(nbvalues * 95) // 100][0],
2509 '95%': values[(nbvalues * 95) // 100][0],
2461 '99%': values[(nbvalues * 99) // 100][0],
2510 '99%': values[(nbvalues * 99) // 100][0],
2462 'max': values[-1][0],
2511 'max': values[-1][0],
2463 }
2512 }
2464 fm.startitem()
2513 fm.startitem()
2465 fm.data(**stats)
2514 fm.data(**stats)
2466 # make node pretty for the human output
2515 # make node pretty for the human output
2467 fm.plain('### %s (%d items)\n' % (title, len(values)))
2516 fm.plain('### %s (%d items)\n' % (title, len(values)))
2468 lines = [
2517 lines = [
2469 'min',
2518 'min',
2470 '10%',
2519 '10%',
2471 '25%',
2520 '25%',
2472 '50%',
2521 '50%',
2473 '75%',
2522 '75%',
2474 '80%',
2523 '80%',
2475 '85%',
2524 '85%',
2476 '90%',
2525 '90%',
2477 '95%',
2526 '95%',
2478 '99%',
2527 '99%',
2479 'max',
2528 'max',
2480 ]
2529 ]
2481 for l in lines:
2530 for l in lines:
2482 fm.plain('%s: %s\n' % (l, stats[l]))
2531 fm.plain('%s: %s\n' % (l, stats[l]))
2483 fm.end()
2532 fm.end()
2484
2533
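# Editor's sketch of the percentile indexing used above: with N sorted
# samples, the "P%" row reads values[(N * P) // 100], a floored index,
# which is why "max" needs the explicit values[-1] special case.
def _percentile_index_sketch():
    values = sorted((v, b'meta') for v in range(20))
    assert values[(len(values) * 90) // 100][0] == 18  # 19th smallest of 20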
2485
2534
2486 @command(
2535 @command(
2487 b'perf::helper-mergecopies|perfhelper-mergecopies',
2536 b'perf::helper-mergecopies|perfhelper-mergecopies',
2488 formatteropts
2537 formatteropts
2489 + [
2538 + [
2490 (b'r', b'revs', [], b'restrict search to these revisions'),
2539 (b'r', b'revs', [], b'restrict search to these revisions'),
2491 (b'', b'timing', False, b'provides extra data (costly)'),
2540 (b'', b'timing', False, b'provides extra data (costly)'),
2492 (b'', b'stats', False, b'provides statistic about the measured data'),
2541 (b'', b'stats', False, b'provides statistic about the measured data'),
2493 ],
2542 ],
2494 )
2543 )
2495 def perfhelpermergecopies(ui, repo, revs=[], **opts):
2544 def perfhelpermergecopies(ui, repo, revs=[], **opts):
2496 """find statistics about potential parameters for `perfmergecopies`
2545 """find statistics about potential parameters for `perfmergecopies`
2497
2546
2498 This command finds (base, p1, p2) triplets relevant for copytracing
2547 This command finds (base, p1, p2) triplets relevant for copytracing
2499 benchmarking in the context of a merge. It reports values for some of the
2548 benchmarking in the context of a merge. It reports values for some of the
2500 parameters that impact merge copy tracing time during merge.
2549 parameters that impact merge copy tracing time during merge.
2501
2550
2502 If `--timing` is set, rename detection is run and the associated timing
2551 If `--timing` is set, rename detection is run and the associated timing
2503 will be reported. The extra details come at the cost of slower command
2552 will be reported. The extra details come at the cost of slower command
2504 execution.
2553 execution.
2505
2554
2506 Since rename detection is only run once, other factors might easily
2555 Since rename detection is only run once, other factors might easily
2507 affect the precision of the timing. However it should give a good
2556 affect the precision of the timing. However it should give a good
2508 approximation of which revision triplets are very costly.
2557 approximation of which revision triplets are very costly.
2509 """
2558 """
2510 opts = _byteskwargs(opts)
2559 opts = _byteskwargs(opts)
2511 fm = ui.formatter(b'perf', opts)
2560 fm = ui.formatter(b'perf', opts)
2512 dotiming = opts[b'timing']
2561 dotiming = opts[b'timing']
2513 dostats = opts[b'stats']
2562 dostats = opts[b'stats']
2514
2563
2515 output_template = [
2564 output_template = [
2516 ("base", "%(base)12s"),
2565 ("base", "%(base)12s"),
2517 ("p1", "%(p1.node)12s"),
2566 ("p1", "%(p1.node)12s"),
2518 ("p2", "%(p2.node)12s"),
2567 ("p2", "%(p2.node)12s"),
2519 ("p1.nb-revs", "%(p1.nbrevs)12d"),
2568 ("p1.nb-revs", "%(p1.nbrevs)12d"),
2520 ("p1.nb-files", "%(p1.nbmissingfiles)12d"),
2569 ("p1.nb-files", "%(p1.nbmissingfiles)12d"),
2521 ("p1.renames", "%(p1.renamedfiles)12d"),
2570 ("p1.renames", "%(p1.renamedfiles)12d"),
2522 ("p1.time", "%(p1.time)12.3f"),
2571 ("p1.time", "%(p1.time)12.3f"),
2523 ("p2.nb-revs", "%(p2.nbrevs)12d"),
2572 ("p2.nb-revs", "%(p2.nbrevs)12d"),
2524 ("p2.nb-files", "%(p2.nbmissingfiles)12d"),
2573 ("p2.nb-files", "%(p2.nbmissingfiles)12d"),
2525 ("p2.renames", "%(p2.renamedfiles)12d"),
2574 ("p2.renames", "%(p2.renamedfiles)12d"),
2526 ("p2.time", "%(p2.time)12.3f"),
2575 ("p2.time", "%(p2.time)12.3f"),
2527 ("renames", "%(nbrenamedfiles)12d"),
2576 ("renames", "%(nbrenamedfiles)12d"),
2528 ("total.time", "%(time)12.3f"),
2577 ("total.time", "%(time)12.3f"),
2529 ]
2578 ]
2530 if not dotiming:
2579 if not dotiming:
2531 output_template = [
2580 output_template = [
2532 i
2581 i
2533 for i in output_template
2582 for i in output_template
2534 if not ('time' in i[0] or 'renames' in i[0])
2583 if not ('time' in i[0] or 'renames' in i[0])
2535 ]
2584 ]
2536 header_names = [h for (h, v) in output_template]
2585 header_names = [h for (h, v) in output_template]
2537 output = ' '.join([v for (h, v) in output_template]) + '\n'
2586 output = ' '.join([v for (h, v) in output_template]) + '\n'
2538 header = ' '.join(['%12s'] * len(header_names)) + '\n'
2587 header = ' '.join(['%12s'] * len(header_names)) + '\n'
2539 fm.plain(header % tuple(header_names))
2588 fm.plain(header % tuple(header_names))
2540
2589
2541 if not revs:
2590 if not revs:
2542 revs = ['all()']
2591 revs = ['all()']
2543 revs = scmutil.revrange(repo, revs)
2592 revs = scmutil.revrange(repo, revs)
2544
2593
2545 if dostats:
2594 if dostats:
2546 alldata = {
2595 alldata = {
2547 'nbrevs': [],
2596 'nbrevs': [],
2548 'nbmissingfiles': [],
2597 'nbmissingfiles': [],
2549 }
2598 }
2550 if dotiming:
2599 if dotiming:
2551 alldata['parentnbrenames'] = []
2600 alldata['parentnbrenames'] = []
2552 alldata['totalnbrenames'] = []
2601 alldata['totalnbrenames'] = []
2553 alldata['parenttime'] = []
2602 alldata['parenttime'] = []
2554 alldata['totaltime'] = []
2603 alldata['totaltime'] = []
2555
2604
2556 roi = repo.revs('merge() and %ld', revs)
2605 roi = repo.revs('merge() and %ld', revs)
2557 for r in roi:
2606 for r in roi:
2558 ctx = repo[r]
2607 ctx = repo[r]
2559 p1 = ctx.p1()
2608 p1 = ctx.p1()
2560 p2 = ctx.p2()
2609 p2 = ctx.p2()
2561 bases = repo.changelog._commonancestorsheads(p1.rev(), p2.rev())
2610 bases = repo.changelog._commonancestorsheads(p1.rev(), p2.rev())
2562 for b in bases:
2611 for b in bases:
2563 b = repo[b]
2612 b = repo[b]
2564 p1missing = copies._computeforwardmissing(b, p1)
2613 p1missing = copies._computeforwardmissing(b, p1)
2565 p2missing = copies._computeforwardmissing(b, p2)
2614 p2missing = copies._computeforwardmissing(b, p2)
2566 data = {
2615 data = {
2567 b'base': b.hex(),
2616 b'base': b.hex(),
2568 b'p1.node': p1.hex(),
2617 b'p1.node': p1.hex(),
2569 b'p1.nbrevs': len(repo.revs('only(%d, %d)', p1.rev(), b.rev())),
2618 b'p1.nbrevs': len(repo.revs('only(%d, %d)', p1.rev(), b.rev())),
2570 b'p1.nbmissingfiles': len(p1missing),
2619 b'p1.nbmissingfiles': len(p1missing),
2571 b'p2.node': p2.hex(),
2620 b'p2.node': p2.hex(),
2572 b'p2.nbrevs': len(repo.revs('only(%d, %d)', p2.rev(), b.rev())),
2621 b'p2.nbrevs': len(repo.revs('only(%d, %d)', p2.rev(), b.rev())),
2573 b'p2.nbmissingfiles': len(p2missing),
2622 b'p2.nbmissingfiles': len(p2missing),
2574 }
2623 }
2575 if dostats:
2624 if dostats:
2576 if p1missing:
2625 if p1missing:
2577 alldata['nbrevs'].append(
2626 alldata['nbrevs'].append(
2578 (data['p1.nbrevs'], b.hex(), p1.hex())
2627 (data['p1.nbrevs'], b.hex(), p1.hex())
2579 )
2628 )
2580 alldata['nbmissingfiles'].append(
2629 alldata['nbmissingfiles'].append(
2581 (data['p1.nbmissingfiles'], b.hex(), p1.hex())
2630 (data['p1.nbmissingfiles'], b.hex(), p1.hex())
2582 )
2631 )
2583 if p2missing:
2632 if p2missing:
2584 alldata['nbrevs'].append(
2633 alldata['nbrevs'].append(
2585 (data['p2.nbrevs'], b.hex(), p2.hex())
2634 (data['p2.nbrevs'], b.hex(), p2.hex())
2586 )
2635 )
2587 alldata['nbmissingfiles'].append(
2636 alldata['nbmissingfiles'].append(
2588 (data['p2.nbmissingfiles'], b.hex(), p2.hex())
2637 (data['p2.nbmissingfiles'], b.hex(), p2.hex())
2589 )
2638 )
2590 if dotiming:
2639 if dotiming:
2591 begin = util.timer()
2640 begin = util.timer()
2592 mergedata = copies.mergecopies(repo, p1, p2, b)
2641 mergedata = copies.mergecopies(repo, p1, p2, b)
2593 end = util.timer()
2642 end = util.timer()
2594 # not very stable timing since we did only one run
2643 # not very stable timing since we did only one run
2595 data['time'] = end - begin
2644 data['time'] = end - begin
2596 # mergedata contains five dicts: "copy", "movewithdir",
2645 # mergedata contains five dicts: "copy", "movewithdir",
2597 # "diverge", "renamedelete" and "dirmove".
2646 # "diverge", "renamedelete" and "dirmove".
2598 # The first 4 are about renamed files, so let's count those.
2647 # The first 4 are about renamed files, so let's count those.
2599 renames = len(mergedata[0])
2648 renames = len(mergedata[0])
2600 renames += len(mergedata[1])
2649 renames += len(mergedata[1])
2601 renames += len(mergedata[2])
2650 renames += len(mergedata[2])
2602 renames += len(mergedata[3])
2651 renames += len(mergedata[3])
2603 data['nbrenamedfiles'] = renames
2652 data['nbrenamedfiles'] = renames
2604 begin = util.timer()
2653 begin = util.timer()
2605 p1renames = copies.pathcopies(b, p1)
2654 p1renames = copies.pathcopies(b, p1)
2606 end = util.timer()
2655 end = util.timer()
2607 data['p1.time'] = end - begin
2656 data['p1.time'] = end - begin
2608 begin = util.timer()
2657 begin = util.timer()
2609 p2renames = copies.pathcopies(b, p2)
2658 p2renames = copies.pathcopies(b, p2)
2610 end = util.timer()
2659 end = util.timer()
2611 data['p2.time'] = end - begin
2660 data['p2.time'] = end - begin
2612 data['p1.renamedfiles'] = len(p1renames)
2661 data['p1.renamedfiles'] = len(p1renames)
2613 data['p2.renamedfiles'] = len(p2renames)
2662 data['p2.renamedfiles'] = len(p2renames)
2614
2663
2615 if dostats:
2664 if dostats:
2616 if p1missing:
2665 if p1missing:
2617 alldata['parentnbrenames'].append(
2666 alldata['parentnbrenames'].append(
2618 (data['p1.renamedfiles'], b.hex(), p1.hex())
2667 (data['p1.renamedfiles'], b.hex(), p1.hex())
2619 )
2668 )
2620 alldata['parenttime'].append(
2669 alldata['parenttime'].append(
2621 (data['p1.time'], b.hex(), p1.hex())
2670 (data['p1.time'], b.hex(), p1.hex())
2622 )
2671 )
2623 if p2missing:
2672 if p2missing:
2624 alldata['parentnbrenames'].append(
2673 alldata['parentnbrenames'].append(
2625 (data['p2.renamedfiles'], b.hex(), p2.hex())
2674 (data['p2.renamedfiles'], b.hex(), p2.hex())
2626 )
2675 )
2627 alldata['parenttime'].append(
2676 alldata['parenttime'].append(
2628 (data['p2.time'], b.hex(), p2.hex())
2677 (data['p2.time'], b.hex(), p2.hex())
2629 )
2678 )
2630 if p1missing or p2missing:
2679 if p1missing or p2missing:
2631 alldata['totalnbrenames'].append(
2680 alldata['totalnbrenames'].append(
2632 (
2681 (
2633 data['nbrenamedfiles'],
2682 data['nbrenamedfiles'],
2634 b.hex(),
2683 b.hex(),
2635 p1.hex(),
2684 p1.hex(),
2636 p2.hex(),
2685 p2.hex(),
2637 )
2686 )
2638 )
2687 )
2639 alldata['totaltime'].append(
2688 alldata['totaltime'].append(
2640 (data['time'], b.hex(), p1.hex(), p2.hex())
2689 (data['time'], b.hex(), p1.hex(), p2.hex())
2641 )
2690 )
2642 fm.startitem()
2691 fm.startitem()
2643 fm.data(**data)
2692 fm.data(**data)
2644 # make node pretty for the human output
2693 # make node pretty for the human output
2645 out = data.copy()
2694 out = data.copy()
2646 out['base'] = fm.hexfunc(b.node())
2695 out['base'] = fm.hexfunc(b.node())
2647 out['p1.node'] = fm.hexfunc(p1.node())
2696 out['p1.node'] = fm.hexfunc(p1.node())
2648 out['p2.node'] = fm.hexfunc(p2.node())
2697 out['p2.node'] = fm.hexfunc(p2.node())
2649 fm.plain(output % out)
2698 fm.plain(output % out)
2650
2699
2651 fm.end()
2700 fm.end()
2652 if dostats:
2701 if dostats:
2653 # use a second formatter because the data are quite different, not sure
2702 # use a second formatter because the data are quite different, not sure
2654 # how it flies with the templater.
2703 # how it flies with the templater.
2655 entries = [
2704 entries = [
2656 ('nbrevs', 'number of revisions covered'),
2705 ('nbrevs', 'number of revisions covered'),
2657 ('nbmissingfiles', 'number of missing files at head'),
2706 ('nbmissingfiles', 'number of missing files at head'),
2658 ]
2707 ]
2659 if dotiming:
2708 if dotiming:
2660 entries.append(
2709 entries.append(
2661 ('parentnbrenames', 'rename from one parent to base')
2710 ('parentnbrenames', 'rename from one parent to base')
2662 )
2711 )
2663 entries.append(('totalnbrenames', 'total number of renames'))
2712 entries.append(('totalnbrenames', 'total number of renames'))
2664 entries.append(('parenttime', 'time for one parent'))
2713 entries.append(('parenttime', 'time for one parent'))
2665 entries.append(('totaltime', 'time for both parents'))
2714 entries.append(('totaltime', 'time for both parents'))
2666 _displaystats(ui, opts, entries, alldata)
2715 _displaystats(ui, opts, entries, alldata)
2667
2716
2668
2717
2669 @command(
2718 @command(
2670 b'perf::helper-pathcopies|perfhelper-pathcopies',
2719 b'perf::helper-pathcopies|perfhelper-pathcopies',
2671 formatteropts
2720 formatteropts
2672 + [
2721 + [
2673 (b'r', b'revs', [], b'restrict search to these revisions'),
2722 (b'r', b'revs', [], b'restrict search to these revisions'),
2674 (b'', b'timing', False, b'provides extra data (costly)'),
2723 (b'', b'timing', False, b'provides extra data (costly)'),
2675 (b'', b'stats', False, b'provides statistic about the measured data'),
2724 (b'', b'stats', False, b'provides statistic about the measured data'),
2676 ],
2725 ],
2677 )
2726 )
2678 def perfhelperpathcopies(ui, repo, revs=[], **opts):
2727 def perfhelperpathcopies(ui, repo, revs=[], **opts):
2679 """find statistic about potential parameters for the `perftracecopies`
2728 """find statistic about potential parameters for the `perftracecopies`
2680
2729
2681 This command find source-destination pair relevant for copytracing testing.
2730 This command find source-destination pair relevant for copytracing testing.
2682 It report value for some of the parameters that impact copy tracing time.
2731 It report value for some of the parameters that impact copy tracing time.
2683
2732
2684 If `--timing` is set, rename detection is run and the associated timing
2733 If `--timing` is set, rename detection is run and the associated timing
2685 will be reported. The extra details come at the cost of a slower command
2734 will be reported. The extra details come at the cost of a slower command
2686 execution.
2735 execution.
2687
2736
2688 Since the rename detection is only run once, other factors might easily
2737 Since the rename detection is only run once, other factors might easily
2689 affect the precision of the timing. However it should give a good
2738 affect the precision of the timing. However it should give a good
2690 approximation of which revision pairs are very costly.
2739 approximation of which revision pairs are very costly.
2691 """
2740 """
2692 opts = _byteskwargs(opts)
2741 opts = _byteskwargs(opts)
2693 fm = ui.formatter(b'perf', opts)
2742 fm = ui.formatter(b'perf', opts)
2694 dotiming = opts[b'timing']
2743 dotiming = opts[b'timing']
2695 dostats = opts[b'stats']
2744 dostats = opts[b'stats']
2696
2745
2697 if dotiming:
2746 if dotiming:
2698 header = '%12s %12s %12s %12s %12s %12s\n'
2747 header = '%12s %12s %12s %12s %12s %12s\n'
2699 output = (
2748 output = (
2700 "%(source)12s %(destination)12s "
2749 "%(source)12s %(destination)12s "
2701 "%(nbrevs)12d %(nbmissingfiles)12d "
2750 "%(nbrevs)12d %(nbmissingfiles)12d "
2702 "%(nbrenamedfiles)12d %(time)18.5f\n"
2751 "%(nbrenamedfiles)12d %(time)18.5f\n"
2703 )
2752 )
2704 header_names = (
2753 header_names = (
2705 "source",
2754 "source",
2706 "destination",
2755 "destination",
2707 "nb-revs",
2756 "nb-revs",
2708 "nb-files",
2757 "nb-files",
2709 "nb-renames",
2758 "nb-renames",
2710 "time",
2759 "time",
2711 )
2760 )
2712 fm.plain(header % header_names)
2761 fm.plain(header % header_names)
2713 else:
2762 else:
2714 header = '%12s %12s %12s %12s\n'
2763 header = '%12s %12s %12s %12s\n'
2715 output = (
2764 output = (
2716 "%(source)12s %(destination)12s "
2765 "%(source)12s %(destination)12s "
2717 "%(nbrevs)12d %(nbmissingfiles)12d\n"
2766 "%(nbrevs)12d %(nbmissingfiles)12d\n"
2718 )
2767 )
2719 fm.plain(header % ("source", "destination", "nb-revs", "nb-files"))
2768 fm.plain(header % ("source", "destination", "nb-revs", "nb-files"))
2720
2769
2721 if not revs:
2770 if not revs:
2722 revs = ['all()']
2771 revs = ['all()']
2723 revs = scmutil.revrange(repo, revs)
2772 revs = scmutil.revrange(repo, revs)
2724
2773
2725 if dostats:
2774 if dostats:
2726 alldata = {
2775 alldata = {
2727 'nbrevs': [],
2776 'nbrevs': [],
2728 'nbmissingfiles': [],
2777 'nbmissingfiles': [],
2729 }
2778 }
2730 if dotiming:
2779 if dotiming:
2731 alldata['nbrenames'] = []
2780 alldata['nbrenames'] = []
2732 alldata['time'] = []
2781 alldata['time'] = []
2733
2782
2734 roi = repo.revs('merge() and %ld', revs)
2783 roi = repo.revs('merge() and %ld', revs)
2735 for r in roi:
2784 for r in roi:
2736 ctx = repo[r]
2785 ctx = repo[r]
2737 p1 = ctx.p1().rev()
2786 p1 = ctx.p1().rev()
2738 p2 = ctx.p2().rev()
2787 p2 = ctx.p2().rev()
2739 bases = repo.changelog._commonancestorsheads(p1, p2)
2788 bases = repo.changelog._commonancestorsheads(p1, p2)
2740 for p in (p1, p2):
2789 for p in (p1, p2):
2741 for b in bases:
2790 for b in bases:
2742 base = repo[b]
2791 base = repo[b]
2743 parent = repo[p]
2792 parent = repo[p]
2744 missing = copies._computeforwardmissing(base, parent)
2793 missing = copies._computeforwardmissing(base, parent)
2745 if not missing:
2794 if not missing:
2746 continue
2795 continue
2747 data = {
2796 data = {
2748 b'source': base.hex(),
2797 b'source': base.hex(),
2749 b'destination': parent.hex(),
2798 b'destination': parent.hex(),
2750 b'nbrevs': len(repo.revs('only(%d, %d)', p, b)),
2799 b'nbrevs': len(repo.revs('only(%d, %d)', p, b)),
2751 b'nbmissingfiles': len(missing),
2800 b'nbmissingfiles': len(missing),
2752 }
2801 }
2753 if dostats:
2802 if dostats:
2754 alldata['nbrevs'].append(
2803 alldata['nbrevs'].append(
2755 (
2804 (
2756 data['nbrevs'],
2805 data['nbrevs'],
2757 base.hex(),
2806 base.hex(),
2758 parent.hex(),
2807 parent.hex(),
2759 )
2808 )
2760 )
2809 )
2761 alldata['nbmissingfiles'].append(
2810 alldata['nbmissingfiles'].append(
2762 (
2811 (
2763 data['nbmissingfiles'],
2812 data['nbmissingfiles'],
2764 base.hex(),
2813 base.hex(),
2765 parent.hex(),
2814 parent.hex(),
2766 )
2815 )
2767 )
2816 )
2768 if dotiming:
2817 if dotiming:
2769 begin = util.timer()
2818 begin = util.timer()
2770 renames = copies.pathcopies(base, parent)
2819 renames = copies.pathcopies(base, parent)
2771 end = util.timer()
2820 end = util.timer()
2772 # not very stable timing since we did only one run
2821 # not very stable timing since we did only one run
2773 data['time'] = end - begin
2822 data['time'] = end - begin
2774 data['nbrenamedfiles'] = len(renames)
2823 data['nbrenamedfiles'] = len(renames)
2775 if dostats:
2824 if dostats:
2776 alldata['time'].append(
2825 alldata['time'].append(
2777 (
2826 (
2778 data['time'],
2827 data['time'],
2779 base.hex(),
2828 base.hex(),
2780 parent.hex(),
2829 parent.hex(),
2781 )
2830 )
2782 )
2831 )
2783 alldata['nbrenames'].append(
2832 alldata['nbrenames'].append(
2784 (
2833 (
2785 data['nbrenamedfiles'],
2834 data['nbrenamedfiles'],
2786 base.hex(),
2835 base.hex(),
2787 parent.hex(),
2836 parent.hex(),
2788 )
2837 )
2789 )
2838 )
2790 fm.startitem()
2839 fm.startitem()
2791 fm.data(**data)
2840 fm.data(**data)
2792 out = data.copy()
2841 out = data.copy()
2793 out['source'] = fm.hexfunc(base.node())
2842 out['source'] = fm.hexfunc(base.node())
2794 out['destination'] = fm.hexfunc(parent.node())
2843 out['destination'] = fm.hexfunc(parent.node())
2795 fm.plain(output % out)
2844 fm.plain(output % out)
2796
2845
2797 fm.end()
2846 fm.end()
2798 if dostats:
2847 if dostats:
2799 entries = [
2848 entries = [
2800 ('nbrevs', 'number of revisions covered'),
2849 ('nbrevs', 'number of revisions covered'),
2801 ('nbmissingfiles', 'number of missing files at head'),
2850 ('nbmissingfiles', 'number of missing files at head'),
2802 ]
2851 ]
2803 if dotiming:
2852 if dotiming:
2804 entries.append(('nbrenames', 'renamed files'))
2853 entries.append(('nbrenames', 'renamed files'))
2805 entries.append(('time', 'time'))
2854 entries.append(('time', 'time'))
2806 _displaystats(ui, opts, entries, alldata)
2855 _displaystats(ui, opts, entries, alldata)
2807
2856
2808
2857
2809 @command(b'perf::cca|perfcca', formatteropts)
2858 @command(b'perf::cca|perfcca', formatteropts)
2810 def perfcca(ui, repo, **opts):
2859 def perfcca(ui, repo, **opts):
2811 opts = _byteskwargs(opts)
2860 opts = _byteskwargs(opts)
2812 timer, fm = gettimer(ui, opts)
2861 timer, fm = gettimer(ui, opts)
2813 timer(lambda: scmutil.casecollisionauditor(ui, False, repo.dirstate))
2862 timer(lambda: scmutil.casecollisionauditor(ui, False, repo.dirstate))
2814 fm.end()
2863 fm.end()


@command(b'perf::fncacheload|perffncacheload', formatteropts)
def perffncacheload(ui, repo, **opts):
    """benchmark loading the fncache from the store"""
    opts = _byteskwargs(opts)
    timer, fm = gettimer(ui, opts)
    s = repo.store

    def d():
        s.fncache._load()

    timer(d)
    fm.end()


@command(b'perf::fncachewrite|perffncachewrite', formatteropts)
def perffncachewrite(ui, repo, **opts):
    """benchmark writing the fncache to disk"""
    opts = _byteskwargs(opts)
    timer, fm = gettimer(ui, opts)
    s = repo.store
    lock = repo.lock()
    s.fncache._load()
    tr = repo.transaction(b'perffncachewrite')
    tr.addbackup(b'fncache')

    def d():
        s.fncache._dirty = True
        s.fncache.write(tr)

    timer(d)
    tr.close()
    lock.release()
    fm.end()


@command(b'perf::fncacheencode|perffncacheencode', formatteropts)
def perffncacheencode(ui, repo, **opts):
    """benchmark encoding all paths currently listed in the fncache"""
    opts = _byteskwargs(opts)
    timer, fm = gettimer(ui, opts)
    s = repo.store
    s.fncache._load()

    def d():
        for p in s.fncache.entries:
            s.encode(p)

    timer(d)
    fm.end()


def _bdiffworker(q, blocks, xdiff, ready, done):
    # Worker loop for the threaded variant of perfbdiff: drain text pairs
    # from the queue until a None sentinel is seen, then wait on the
    # condition variable for the next benchmark run (or exit once `done`
    # is set).
    while not done.is_set():
        pair = q.get()
        while pair is not None:
            if xdiff:
                mdiff.bdiff.xdiffblocks(*pair)
            elif blocks:
                mdiff.bdiff.blocks(*pair)
            else:
                mdiff.textdiff(*pair)
            q.task_done()
            pair = q.get()
        q.task_done()  # for the None one
        with ready:
            ready.wait()


def _manifestrevision(repo, mnode):
    ml = repo.manifestlog

    if util.safehasattr(ml, b'getstorage'):
        store = ml.getstorage(b'')
    else:
        store = ml._revlog

    return store.revision(mnode)


@command(
    b'perf::bdiff|perfbdiff',
    revlogopts
    + formatteropts
    + [
        (
            b'',
            b'count',
            1,
            b'number of revisions to test (when using --startrev)',
        ),
        (b'', b'alldata', False, b'test bdiffs for all associated revisions'),
        (b'', b'threads', 0, b'number of threads to use (disable with 0)'),
        (b'', b'blocks', False, b'test computing diffs into blocks'),
        (b'', b'xdiff', False, b'use xdiff algorithm'),
    ],
    b'-c|-m|FILE REV',
)
def perfbdiff(ui, repo, file_, rev=None, count=None, threads=0, **opts):
    """benchmark a bdiff between revisions

    By default, benchmark a bdiff between a revision and its delta parent.

    With ``--count``, benchmark bdiffs between delta parents and self for N
    revisions starting at the specified revision.

    With ``--alldata``, assume the requested revision is a changeset and
    measure bdiffs for all changes related to that changeset (manifest
    and filelogs).
    """
    opts = _byteskwargs(opts)

    if opts[b'xdiff'] and not opts[b'blocks']:
        raise error.CommandError(b'perfbdiff', b'--xdiff requires --blocks')

    if opts[b'alldata']:
        opts[b'changelog'] = True

    if opts.get(b'changelog') or opts.get(b'manifest'):
        file_, rev = None, file_
    elif rev is None:
        raise error.CommandError(b'perfbdiff', b'invalid arguments')

    blocks = opts[b'blocks']
    xdiff = opts[b'xdiff']
    textpairs = []

    r = cmdutil.openrevlog(repo, b'perfbdiff', file_, opts)

    startrev = r.rev(r.lookup(rev))
    for rev in range(startrev, min(startrev + count, len(r) - 1)):
        if opts[b'alldata']:
            # Load revisions associated with changeset.
            ctx = repo[rev]
            mtext = _manifestrevision(repo, ctx.manifestnode())
            for pctx in ctx.parents():
                pman = _manifestrevision(repo, pctx.manifestnode())
                textpairs.append((pman, mtext))

            # Load filelog revisions by iterating manifest delta.
            man = ctx.manifest()
            pman = ctx.p1().manifest()
            for filename, change in pman.diff(man).items():
                fctx = repo.file(filename)
                f1 = fctx.revision(change[0][0] or -1)
                f2 = fctx.revision(change[1][0] or -1)
                textpairs.append((f1, f2))
        else:
            dp = r.deltaparent(rev)
            textpairs.append((r.revision(dp), r.revision(rev)))

    withthreads = threads > 0
    if not withthreads:

        def d():
            for pair in textpairs:
                if xdiff:
                    mdiff.bdiff.xdiffblocks(*pair)
                elif blocks:
                    mdiff.bdiff.blocks(*pair)
                else:
                    mdiff.textdiff(*pair)

    else:
        q = queue()
        for i in _xrange(threads):
            q.put(None)
        ready = threading.Condition()
        done = threading.Event()
        for i in _xrange(threads):
            threading.Thread(
                target=_bdiffworker, args=(q, blocks, xdiff, ready, done)
            ).start()
        q.join()

        def d():
            for pair in textpairs:
                q.put(pair)
            for i in _xrange(threads):
                q.put(None)
            with ready:
                ready.notify_all()
            q.join()

    timer, fm = gettimer(ui, opts)
    timer(d)
    fm.end()

    if withthreads:
        done.set()
        for i in _xrange(threads):
            q.put(None)
        with ready:
            ready.notify_all()
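
# Illustrative invocations of the command above (revision numbers and the
# file name are examples, not taken from any particular repository):
#
#   hg perf::bdiff -c 1000                    # changelog rev 1000 vs delta parent
#   hg perf::bdiff -m 1000 --count 10         # 10 consecutive manifest revisions
#   hg perf::bdiff -c 1000 --blocks --xdiff   # block-level diffs via xdiff
#   hg perf::bdiff -c 1000 --threads 4        # threaded worker variant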


@command(
    b'perf::unbundle',
    [
        (b'', b'as-push', None, b'pretend the bundle comes from a push'),
    ]
    + formatteropts,
    b'BUNDLE_FILE',
)
def perf_unbundle(ui, repo, fname, **opts):
    """benchmark application of a bundle in a repository.

    This does not include the final transaction processing.

    The --as-push option makes the unbundle operation appear to come from
    a client push. It changes some aspects of the processing and the
    associated performance profile.
    """

    from mercurial import exchange
    from mercurial import bundle2
    from mercurial import transaction

    opts = _byteskwargs(opts)

    ### some compatibility hotfix
    #
    # the data attribute is dropped in 63edc384d3b7, a changeset introducing a
    # critical regression that broke transaction rollback for files that are
    # de-inlined.
    method = transaction.transaction._addentry
    pre_63edc384d3b7 = "data" in getargspec(method).args
    # the `detailed_exit_code` attribute is introduced in 33c0c25d0b0f,
    # a changeset that is a close descendant of 18415fc918a1, the changeset
    # that concluded the fix run for the bug introduced in 63edc384d3b7.
    args = getargspec(error.Abort.__init__).args
    post_18415fc918a1 = "detailed_exit_code" in args

    unbundle_source = b'perf::unbundle'
    if opts[b'as_push']:
        unbundle_source = b'push'

    old_max_inline = None
    try:
        if not (pre_63edc384d3b7 or post_18415fc918a1):
            # disable inlining
            old_max_inline = mercurial.revlog._maxinline
            # large enough to never happen
            mercurial.revlog._maxinline = 2 ** 50

        with repo.lock():
            bundle = [None, None]
            orig_quiet = repo.ui.quiet
            try:
                repo.ui.quiet = True
                with open(fname, mode="rb") as f:

                    def noop_report(*args, **kwargs):
                        pass

                    def setup():
                        gen, tr = bundle
                        if tr is not None:
                            tr.abort()
                        bundle[:] = [None, None]
                        f.seek(0)
                        bundle[0] = exchange.readbundle(ui, f, fname)
                        bundle[1] = repo.transaction(b'perf::unbundle')
                        # silence the transaction
                        bundle[1]._report = noop_report

                    def apply():
                        gen, tr = bundle
                        bundle2.applybundle(
                            repo,
                            gen,
                            tr,
                            source=unbundle_source,
                            url=fname,
                        )

                    timer, fm = gettimer(ui, opts)
                    timer(apply, setup=setup)
                    fm.end()
            finally:
                repo.ui.quiet = orig_quiet
                gen, tr = bundle
                if tr is not None:
                    tr.abort()
    finally:
        if old_max_inline is not None:
            mercurial.revlog._maxinline = old_max_inline
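
# Illustrative usage (the bundle path is an example): create a bundle once,
# then benchmark applying it, optionally pretending it arrived via push:
#
#   hg bundle --all /tmp/all.hg
#   hg perf::unbundle /tmp/all.hg
#   hg perf::unbundle --as-push /tmp/all.hg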


@command(
    b'perf::unidiff|perfunidiff',
    revlogopts
    + formatteropts
    + [
        (
            b'',
            b'count',
            1,
            b'number of revisions to test (when using --startrev)',
        ),
        (b'', b'alldata', False, b'test unidiffs for all associated revisions'),
    ],
    b'-c|-m|FILE REV',
)
def perfunidiff(ui, repo, file_, rev=None, count=None, **opts):
    """benchmark a unified diff between revisions

    This doesn't include any copy tracing - it's just a unified diff
    of the texts.

    By default, benchmark a diff between a revision and its delta parent.

    With ``--count``, benchmark diffs between delta parents and self for N
    revisions starting at the specified revision.

    With ``--alldata``, assume the requested revision is a changeset and
    measure diffs for all changes related to that changeset (manifest
    and filelogs).
    """
    opts = _byteskwargs(opts)
    if opts[b'alldata']:
        opts[b'changelog'] = True

    if opts.get(b'changelog') or opts.get(b'manifest'):
        file_, rev = None, file_
    elif rev is None:
        raise error.CommandError(b'perfunidiff', b'invalid arguments')

    textpairs = []

    r = cmdutil.openrevlog(repo, b'perfunidiff', file_, opts)

    startrev = r.rev(r.lookup(rev))
    for rev in range(startrev, min(startrev + count, len(r) - 1)):
        if opts[b'alldata']:
            # Load revisions associated with changeset.
            ctx = repo[rev]
            mtext = _manifestrevision(repo, ctx.manifestnode())
            for pctx in ctx.parents():
                pman = _manifestrevision(repo, pctx.manifestnode())
                textpairs.append((pman, mtext))

            # Load filelog revisions by iterating manifest delta.
            man = ctx.manifest()
            pman = ctx.p1().manifest()
            for filename, change in pman.diff(man).items():
                fctx = repo.file(filename)
                f1 = fctx.revision(change[0][0] or -1)
                f2 = fctx.revision(change[1][0] or -1)
                textpairs.append((f1, f2))
        else:
            dp = r.deltaparent(rev)
            textpairs.append((r.revision(dp), r.revision(rev)))

    def d():
        for left, right in textpairs:
            # The date strings don't matter, so we pass empty strings.
            headerlines, hunks = mdiff.unidiff(
                left, b'', right, b'', b'left', b'right', binary=False
            )
            # consume iterators in roughly the way patch.py does
            b'\n'.join(headerlines)
            b''.join(sum((list(hlines) for hrange, hlines in hunks), []))

    timer, fm = gettimer(ui, opts)
    timer(d)
    fm.end()
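
# Illustrative usage (revision numbers are examples):
#
#   hg perf::unidiff -c 1000 --count 10   # unified diffs for 10 changelog revs
#   hg perf::unidiff -c 1000 --alldata    # include manifest and filelog diffs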


@command(b'perf::diffwd|perfdiffwd', formatteropts)
def perfdiffwd(ui, repo, **opts):
    """Profile diff of working directory changes"""
    opts = _byteskwargs(opts)
    timer, fm = gettimer(ui, opts)
    options = {
        'w': 'ignore_all_space',
        'b': 'ignore_space_change',
        'B': 'ignore_blank_lines',
    }

    for diffopt in ('', 'w', 'b', 'B', 'wB'):
        opts = {options[c]: b'1' for c in diffopt}

        def d():
            ui.pushbuffer()
            commands.diff(ui, repo, **opts)
            ui.popbuffer()

        diffopt = diffopt.encode('ascii')
        title = b'diffopts: %s' % (diffopt and (b'-' + diffopt) or b'none')
        timer(d, title=title)
    fm.end()
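
# Each timed run above corresponds to `hg diff` with a whitespace-related
# option combination; for instance, the b'-wB' title matches:
#
#   hg diff --ignore-all-space --ignore-blank-lines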


@command(
    b'perf::revlogindex|perfrevlogindex',
    revlogopts + formatteropts,
    b'-c|-m|FILE',
)
def perfrevlogindex(ui, repo, file_=None, **opts):
    """Benchmark operations against a revlog index.

    This tests constructing a revlog instance, reading index data,
    parsing index data, and performing various operations related to
    index data.
    """

    opts = _byteskwargs(opts)

    rl = cmdutil.openrevlog(repo, b'perfrevlogindex', file_, opts)

    opener = getattr(rl, 'opener')  # trick linter
    # compat with hg <= 5.8
    radix = getattr(rl, 'radix', None)
    indexfile = getattr(rl, '_indexfile', None)
    if indexfile is None:
        # compatibility with <= hg-5.8
        indexfile = getattr(rl, 'indexfile')
    data = opener.read(indexfile)

    header = struct.unpack(b'>I', data[0:4])[0]
    version = header & 0xFFFF
    if version == 1:
        inline = header & (1 << 16)
    else:
        raise error.Abort(b'unsupported revlog version: %d' % version)

    parse_index_v1 = getattr(mercurial.revlog, 'parse_index_v1', None)
    if parse_index_v1 is None:
        parse_index_v1 = mercurial.revlog.revlogio().parseindex

    rllen = len(rl)

    node0 = rl.node(0)
    node25 = rl.node(rllen // 4)
    node50 = rl.node(rllen // 2)
    node75 = rl.node(rllen // 4 * 3)
    node100 = rl.node(rllen - 1)

    allrevs = range(rllen)
    allrevsrev = list(reversed(allrevs))
    allnodes = [rl.node(rev) for rev in range(rllen)]
    allnodesrev = list(reversed(allnodes))

    def constructor():
        if radix is not None:
            revlog(opener, radix=radix)
        else:
            # hg <= 5.8
            revlog(opener, indexfile=indexfile)

    def read():
        with opener(indexfile) as fh:
            fh.read()

    def parseindex():
        parse_index_v1(data, inline)

    def getentry(revornode):
        index = parse_index_v1(data, inline)[0]
        index[revornode]

    def getentries(revs, count=1):
        index = parse_index_v1(data, inline)[0]

        for i in range(count):
            for rev in revs:
                index[rev]

    def resolvenode(node):
        index = parse_index_v1(data, inline)[0]
        rev = getattr(index, 'rev', None)
        if rev is None:
            nodemap = getattr(parse_index_v1(data, inline)[0], 'nodemap', None)
            # This only works for the C code.
            if nodemap is None:
                return
            rev = nodemap.__getitem__

        try:
            rev(node)
        except error.RevlogError:
            pass

    def resolvenodes(nodes, count=1):
        index = parse_index_v1(data, inline)[0]
        rev = getattr(index, 'rev', None)
        if rev is None:
            nodemap = getattr(parse_index_v1(data, inline)[0], 'nodemap', None)
            # This only works for the C code.
            if nodemap is None:
                return
            rev = nodemap.__getitem__

        for i in range(count):
            for node in nodes:
                try:
                    rev(node)
                except error.RevlogError:
                    pass

    benches = [
        (constructor, b'revlog constructor'),
        (read, b'read'),
        (parseindex, b'create index object'),
        (lambda: getentry(0), b'retrieve index entry for rev 0'),
        (lambda: resolvenode(b'a' * 20), b'look up missing node'),
        (lambda: resolvenode(node0), b'look up node at rev 0'),
        (lambda: resolvenode(node25), b'look up node at 1/4 len'),
        (lambda: resolvenode(node50), b'look up node at 1/2 len'),
        (lambda: resolvenode(node75), b'look up node at 3/4 len'),
        (lambda: resolvenode(node100), b'look up node at tip'),
        # 2x variation is to measure caching impact.
        (lambda: resolvenodes(allnodes), b'look up all nodes (forward)'),
        (lambda: resolvenodes(allnodes, 2), b'look up all nodes 2x (forward)'),
        (lambda: resolvenodes(allnodesrev), b'look up all nodes (reverse)'),
        (
            lambda: resolvenodes(allnodesrev, 2),
            b'look up all nodes 2x (reverse)',
        ),
        (lambda: getentries(allrevs), b'retrieve all index entries (forward)'),
        (
            lambda: getentries(allrevs, 2),
            b'retrieve all index entries 2x (forward)',
        ),
        (
            lambda: getentries(allrevsrev),
            b'retrieve all index entries (reverse)',
        ),
        (
            lambda: getentries(allrevsrev, 2),
            b'retrieve all index entries 2x (reverse)',
        ),
    ]

    for fn, title in benches:
        timer, fm = gettimer(ui, opts)
        timer(fn, title=title)
        fm.end()
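
# Illustrative usage: benchmark the changelog index, or a single filelog's
# index (the file name is an example):
#
#   hg perf::revlogindex -c
#   hg perf::revlogindex README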


@command(
    b'perf::revlogrevisions|perfrevlogrevisions',
    revlogopts
    + formatteropts
    + [
        (b'd', b'dist', 100, b'distance between the revisions'),
        (b's', b'startrev', 0, b'revision to start reading at'),
        (b'', b'reverse', False, b'read in reverse'),
    ],
    b'-c|-m|FILE',
)
def perfrevlogrevisions(
    ui, repo, file_=None, startrev=0, reverse=False, **opts
):
    """Benchmark reading a series of revisions from a revlog.

    By default, we read every ``-d/--dist`` revision from 0 to tip of
    the specified revlog.

    The start revision can be defined via ``-s/--startrev``.
    """
    opts = _byteskwargs(opts)

    rl = cmdutil.openrevlog(repo, b'perfrevlogrevisions', file_, opts)
    rllen = getlen(ui)(rl)

    if startrev < 0:
        startrev = rllen + startrev

    def d():
        rl.clearcaches()

        beginrev = startrev
        endrev = rllen
        dist = opts[b'dist']

        if reverse:
            beginrev, endrev = endrev - 1, beginrev - 1
            dist = -1 * dist

        for x in _xrange(beginrev, endrev, dist):
            # Old revisions don't support passing int.
            n = rl.node(x)
            rl.revision(n)

    timer, fm = gettimer(ui, opts)
    timer(d)
    fm.end()
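
# Illustrative usage: read every 10th manifest revision, then the same series
# starting 1000 revisions before tip (a negative --startrev counts from tip)
# and walking backwards:
#
#   hg perf::revlogrevisions -m --dist 10
#   hg perf::revlogrevisions -m --dist 10 --startrev=-1000 --reverse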


@command(
    b'perf::revlogwrite|perfrevlogwrite',
    revlogopts
    + formatteropts
    + [
        (b's', b'startrev', 1000, b'revision to start writing at'),
        (b'', b'stoprev', -1, b'last revision to write'),
        (b'', b'count', 3, b'number of passes to perform'),
        (b'', b'details', False, b'print timing for every revision tested'),
        (b'', b'source', b'full', b'the kind of data fed into the revlog'),
        (b'', b'lazydeltabase', True, b'try the provided delta first'),
        (b'', b'clear-caches', True, b'clear revlog cache between calls'),
    ],
    b'-c|-m|FILE',
)
def perfrevlogwrite(ui, repo, file_=None, startrev=1000, stoprev=-1, **opts):
    """Benchmark writing a series of revisions to a revlog.

    Possible source values are:
    * `full`: add from a full text (default).
    * `parent-1`: add from a delta to the first parent
    * `parent-2`: add from a delta to the second parent if it exists
      (use a delta from the first parent otherwise)
    * `parent-smallest`: add from the smallest delta (either p1 or p2)
    * `storage`: add from the existing precomputed deltas

    Note: This performance command measures performance in a custom way. As a
    result some of the global configuration of the 'perf' command does not
    apply to it:

    * ``pre-run``: disabled

    * ``profile-benchmark``: disabled

    * ``run-limits``: disabled, use --count instead
    """
    opts = _byteskwargs(opts)

    rl = cmdutil.openrevlog(repo, b'perfrevlogwrite', file_, opts)
    rllen = getlen(ui)(rl)
    if startrev < 0:
        startrev = rllen + startrev
    if stoprev < 0:
        stoprev = rllen + stoprev

    lazydeltabase = opts['lazydeltabase']
    source = opts['source']
    clearcaches = opts['clear_caches']
    validsource = (
        b'full',
        b'parent-1',
        b'parent-2',
        b'parent-smallest',
        b'storage',
    )
    if source not in validsource:
        raise error.Abort('invalid source type: %s' % source)

    ### actually gather results
    count = opts['count']
    if count <= 0:
        raise error.Abort('invalid run count: %d' % count)
    allresults = []
    for c in range(count):
        timing = _timeonewrite(
            ui,
            rl,
            source,
            startrev,
            stoprev,
            c + 1,
            lazydeltabase=lazydeltabase,
            clearcaches=clearcaches,
        )
        allresults.append(timing)

    ### consolidate the results in a single list
    results = []
    for idx, (rev, t) in enumerate(allresults[0]):
        ts = [t]
        for other in allresults[1:]:
            orev, ot = other[idx]
            assert orev == rev
            ts.append(ot)
        results.append((rev, ts))
    resultcount = len(results)

    ### Compute and display relevant statistics

    # get a formatter
    fm = ui.formatter(b'perf', opts)
    displayall = ui.configbool(b"perf", b"all-timing", True)

    # print individual details if requested
    if opts['details']:
        for idx, item in enumerate(results, 1):
            rev, data = item
            title = 'revisions #%d of %d, rev %d' % (idx, resultcount, rev)
            formatone(fm, data, title=title, displayall=displayall)

    # sort results by median time
    results.sort(key=lambda x: sorted(x[1])[len(x[1]) // 2])
    # list of (name, index) to display
    relevants = [
        ("min", 0),
        ("10%", resultcount * 10 // 100),
        ("25%", resultcount * 25 // 100),
        ("50%", resultcount * 50 // 100),
        ("75%", resultcount * 75 // 100),
        ("90%", resultcount * 90 // 100),
        ("95%", resultcount * 95 // 100),
        ("99%", resultcount * 99 // 100),
        ("99.9%", resultcount * 999 // 1000),
        ("99.99%", resultcount * 9999 // 10000),
        ("99.999%", resultcount * 99999 // 100000),
        ("max", -1),
    ]
    if not ui.quiet:
        for name, idx in relevants:
            data = results[idx]
            title = '%s of %d, rev %d' % (name, resultcount, data[0])
            formatone(fm, data[1], title=title, displayall=displayall)

    # XXX summing that many floats will not be very precise, we ignore this
    # fact for now
    totaltime = []
    for item in allresults:
        totaltime.append(
            (
                sum(x[1][0] for x in item),
                sum(x[1][1] for x in item),
                sum(x[1][2] for x in item),
            )
        )
    formatone(
        fm,
        totaltime,
        title="total time (%d revs)" % resultcount,
        displayall=displayall,
    )
    fm.end()
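
# A quick worked example of the percentile table above: with
# resultcount == 200 timed revisions, "90%" selects
# results[200 * 90 // 100] == results[180], i.e. the revision whose median
# write time sits at the 90th percentile after the sort by median.
#
# Illustrative invocation:
#
#   hg perf::revlogwrite -m --source parent-smallest --count 5 --details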


class _faketr:
    def add(self, x, y, z=None):
        return None


def _timeonewrite(
    ui,
    orig,
    source,
    startrev,
    stoprev,
    runidx=None,
    lazydeltabase=True,
    clearcaches=True,
):
    timings = []
    tr = _faketr()
    with _temprevlog(ui, orig, startrev) as dest:
        if hasattr(dest, "delta_config"):
            dest.delta_config.lazy_delta_base = lazydeltabase
        else:
            dest._lazydeltabase = lazydeltabase
        revs = list(orig.revs(startrev, stoprev))
        total = len(revs)
        topic = 'adding'
        if runidx is not None:
            topic += ' (run #%d)' % runidx
        # Support both old and new progress API
        if util.safehasattr(ui, 'makeprogress'):
            progress = ui.makeprogress(topic, unit='revs', total=total)

            def updateprogress(pos):
                progress.update(pos)

            def completeprogress():
                progress.complete()

        else:

            def updateprogress(pos):
                ui.progress(topic, pos, unit='revs', total=total)

            def completeprogress():
                ui.progress(topic, None, unit='revs', total=total)

        for idx, rev in enumerate(revs):
            updateprogress(idx)
            addargs, addkwargs = _getrevisionseed(orig, rev, tr, source)
            if clearcaches:
                dest.index.clearcaches()
                dest.clearcaches()
            with timeone() as r:
                dest.addrawrevision(*addargs, **addkwargs)
            timings.append((rev, r[0]))
        updateprogress(total)
        completeprogress()
    return timings


def _getrevisionseed(orig, rev, tr, source):
    from mercurial.node import nullid

    linkrev = orig.linkrev(rev)
    node = orig.node(rev)
    p1, p2 = orig.parents(node)
    flags = orig.flags(rev)
    cachedelta = None
    text = None

    if source == b'full':
        text = orig.revision(rev)
    elif source == b'parent-1':
        baserev = orig.rev(p1)
        cachedelta = (baserev, orig.revdiff(p1, rev))
    elif source == b'parent-2':
        parent = p2
        if p2 == nullid:
            parent = p1
        baserev = orig.rev(parent)
        cachedelta = (baserev, orig.revdiff(parent, rev))
    elif source == b'parent-smallest':
        p1diff = orig.revdiff(p1, rev)
        parent = p1
        diff = p1diff
        if p2 != nullid:
            p2diff = orig.revdiff(p2, rev)
            if len(p1diff) > len(p2diff):
                parent = p2
                diff = p2diff
        baserev = orig.rev(parent)
        cachedelta = (baserev, diff)
    elif source == b'storage':
        baserev = orig.deltaparent(rev)
        cachedelta = (baserev, orig.revdiff(orig.node(baserev), rev))

    return (
        (text, tr, linkrev, p1, p2),
        {'node': node, 'flags': flags, 'cachedelta': cachedelta},
    )


@contextlib.contextmanager
def _temprevlog(ui, orig, truncaterev):
    from mercurial import vfs as vfsmod

    if orig._inline:
        raise error.Abort('not supporting inline revlog (yet)')
    revlogkwargs = {}
    k = 'upperboundcomp'
    if util.safehasattr(orig, k):
        revlogkwargs[k] = getattr(orig, k)

    indexfile = getattr(orig, '_indexfile', None)
    if indexfile is None:
        # compatibility with <= hg-5.8
        indexfile = getattr(orig, 'indexfile')
    origindexpath = orig.opener.join(indexfile)

    datafile = getattr(orig, '_datafile', getattr(orig, 'datafile'))
    origdatapath = orig.opener.join(datafile)
    radix = b'revlog'
    indexname = b'revlog.i'
    dataname = b'revlog.d'

    tmpdir = tempfile.mkdtemp(prefix='tmp-hgperf-')
    try:
        # copy the data file in a temporary directory
        ui.debug('copying data in %s\n' % tmpdir)
        destindexpath = os.path.join(tmpdir, 'revlog.i')
        destdatapath = os.path.join(tmpdir, 'revlog.d')
        shutil.copyfile(origindexpath, destindexpath)
        shutil.copyfile(origdatapath, destdatapath)

        # remove the data we want to add again
        ui.debug('truncating data to be rewritten\n')
        with open(destindexpath, 'ab') as index:
            index.seek(0)
            index.truncate(truncaterev * orig._io.size)
        with open(destdatapath, 'ab') as data:
            data.seek(0)
            data.truncate(orig.start(truncaterev))

        # instantiate a new revlog from the temporary copy
        ui.debug('instantiating revlog from the truncated copy\n')
        vfs = vfsmod.vfs(tmpdir)
        vfs.options = getattr(orig.opener, 'options', None)

        try:
            dest = revlog(vfs, radix=radix, **revlogkwargs)
        except TypeError:
            dest = revlog(
                vfs, indexfile=indexname, datafile=dataname, **revlogkwargs
            )
        if dest._inline:
            raise error.Abort('not supporting inline revlog (yet)')
        # make sure internals are initialized
        dest.revision(len(dest) - 1)
        yield dest
        del dest, vfs
    finally:
        shutil.rmtree(tmpdir, True)


@command(
    b'perf::revlogchunks|perfrevlogchunks',
    revlogopts
    + formatteropts
    + [
        (b'e', b'engines', b'', b'compression engines to use'),
        (b's', b'startrev', 0, b'revision to start at'),
    ],
    b'-c|-m|FILE',
)
def perfrevlogchunks(ui, repo, file_=None, engines=None, startrev=0, **opts):
    """Benchmark operations on revlog chunks.

    Logically, each revlog is a collection of fulltext revisions. However,
    stored within each revlog are "chunks" of possibly compressed data. This
    data needs to be read and decompressed or compressed and written.

    This command measures the time it takes to read+decompress and recompress
    chunks in a revlog. It effectively isolates I/O and compression
    performance. For measurements of higher-level operations like resolving
    revisions, see ``perfrevlogrevisions`` and ``perfrevlogrevision``.
    """
    opts = _byteskwargs(opts)

    rl = cmdutil.openrevlog(repo, b'perfrevlogchunks', file_, opts)

    # - _chunkraw was renamed to _getsegmentforrevs
    # - _getsegmentforrevs was moved on the inner object
    try:
        segmentforrevs = rl._inner.get_segment_for_revs
    except AttributeError:
        try:
            segmentforrevs = rl._getsegmentforrevs
        except AttributeError:
            segmentforrevs = rl._chunkraw

    # Verify engines argument.
    if engines:
        engines = {e.strip() for e in engines.split(b',')}
        for engine in engines:
            try:
                util.compressionengines[engine]
            except KeyError:
                raise error.Abort(b'unknown compression engine: %s' % engine)
    else:
        engines = []
        for e in util.compengines:
            engine = util.compengines[e]
            try:
                if engine.available():
                    engine.revlogcompressor().compress(b'dummy')
                    engines.append(e)
            except NotImplementedError:
                pass

    revs = list(rl.revs(startrev, len(rl) - 1))

    @contextlib.contextmanager
    def reading(rl):
        if getattr(rl, 'reading', None) is not None:
            with rl.reading():
                yield None
        elif rl._inline:
            indexfile = getattr(rl, '_indexfile', None)
            if indexfile is None:
                # compatibility with <= hg-5.8
                indexfile = getattr(rl, 'indexfile')
            yield getsvfs(repo)(indexfile)
        else:
            datafile = getattr(rl, '_datafile', getattr(rl, 'datafile'))
            yield getsvfs(repo)(datafile)

    if getattr(rl, 'reading', None) is not None:

        @contextlib.contextmanager
        def lazy_reading(rl):
            with rl.reading():
                yield

    else:

        @contextlib.contextmanager
        def lazy_reading(rl):
            yield

    def doread():
        rl.clearcaches()
        for rev in revs:
            with lazy_reading(rl):
                segmentforrevs(rev, rev)

    def doreadcachedfh():
        rl.clearcaches()
        with reading(rl) as fh:
            if fh is not None:
                for rev in revs:
                    segmentforrevs(rev, rev, df=fh)
            else:
                for rev in revs:
                    segmentforrevs(rev, rev)

    def doreadbatch():
        rl.clearcaches()
        with lazy_reading(rl):
            segmentforrevs(revs[0], revs[-1])

    def doreadbatchcachedfh():
        rl.clearcaches()
        with reading(rl) as fh:
            if fh is not None:
                segmentforrevs(revs[0], revs[-1], df=fh)
            else:
                segmentforrevs(revs[0], revs[-1])

    def dochunk():
        rl.clearcaches()
        # chunk used to be available directly on the revlog
        _chunk = getattr(rl, '_inner', rl)._chunk
        with reading(rl) as fh:
            if fh is not None:
                for rev in revs:
                    _chunk(rev, df=fh)
            else:
                for rev in revs:
                    _chunk(rev)

    chunks = [None]

    def dochunkbatch():
        rl.clearcaches()
        _chunks = getattr(rl, '_inner', rl)._chunks
        with reading(rl) as fh:
            if fh is not None:
                # Save chunks as a side-effect.
                chunks[0] = _chunks(revs, df=fh)
            else:
                # Save chunks as a side-effect.
                chunks[0] = _chunks(revs)

    def docompress(compressor):
        rl.clearcaches()

        compressor_holder = getattr(rl, '_inner', rl)

        try:
            # Swap in the requested compression engine.
            oldcompressor = compressor_holder._compressor
            compressor_holder._compressor = compressor
            for chunk in chunks[0]:
                rl.compress(chunk)
        finally:
            compressor_holder._compressor = oldcompressor

    benches = [
        (lambda: doread(), b'read'),
        (lambda: doreadcachedfh(), b'read w/ reused fd'),
        (lambda: doreadbatch(), b'read batch'),
        (lambda: doreadbatchcachedfh(), b'read batch w/ reused fd'),
        (lambda: dochunk(), b'chunk'),
3917 (lambda: dochunk(), b'chunk'),
3869 (lambda: dochunkbatch(), b'chunk batch'),
3918 (lambda: dochunkbatch(), b'chunk batch'),
3870 ]
3919 ]
3871
3920
3872 for engine in sorted(engines):
3921 for engine in sorted(engines):
3873 compressor = util.compengines[engine].revlogcompressor()
3922 compressor = util.compengines[engine].revlogcompressor()
3874 benches.append(
3923 benches.append(
3875 (
3924 (
3876 functools.partial(docompress, compressor),
3925 functools.partial(docompress, compressor),
3877 b'compress w/ %s' % engine,
3926 b'compress w/ %s' % engine,
3878 )
3927 )
3879 )
3928 )
3880
3929
3881 for fn, title in benches:
3930 for fn, title in benches:
3882 timer, fm = gettimer(ui, opts)
3931 timer, fm = gettimer(ui, opts)
3883 timer(fn, title=title)
3932 timer(fn, title=title)
3884 fm.end()
3933 fm.end()
3885
3934
3886
3935
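# Illustrative sketch (not part of upstream perf.py): assuming the function
# above is registered as perf::revlogchunks in the same style as the commands
# below, and that the `startrev` and comma-separated `engines` parameters
# used in its body are exposed as flags of the same names, a manifest run
# restricted to two engines would look like (revision number hypothetical):
#
#   $ hg perf::revlogchunks -m --startrev 10000 --engines 'zlib,zstd'

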
@command(
    b'perf::revlogrevision|perfrevlogrevision',
    revlogopts
    + formatteropts
    + [(b'', b'cache', False, b'use caches instead of clearing')],
    b'-c|-m|FILE REV',
)
def perfrevlogrevision(ui, repo, file_, rev=None, cache=None, **opts):
    """Benchmark obtaining a revlog revision.

    Obtaining a revlog revision consists of roughly the following steps:

    1. Compute the delta chain
    2. Slice the delta chain if applicable
    3. Obtain the raw chunks for that delta chain
    4. Decompress each raw chunk
    5. Apply binary patches to obtain fulltext
    6. Verify hash of fulltext

    This command measures the time spent in each of these phases.
    """
    opts = _byteskwargs(opts)

    if opts.get(b'changelog') or opts.get(b'manifest'):
        file_, rev = None, file_
    elif rev is None:
        raise error.CommandError(b'perfrevlogrevision', b'invalid arguments')

    r = cmdutil.openrevlog(repo, b'perfrevlogrevision', file_, opts)

    # _chunkraw was renamed to _getsegmentforrevs.
    try:
        segmentforrevs = r._inner.get_segment_for_revs
    except AttributeError:
        try:
            segmentforrevs = r._getsegmentforrevs
        except AttributeError:
            segmentforrevs = r._chunkraw

    node = r.lookup(rev)
    rev = r.rev(node)

    if getattr(r, 'reading', None) is not None:

        @contextlib.contextmanager
        def lazy_reading(r):
            with r.reading():
                yield

    else:

        @contextlib.contextmanager
        def lazy_reading(r):
            yield

    def getrawchunks(data, chain):
        start = r.start
        length = r.length
        inline = r._inline
        try:
            iosize = r.index.entry_size
        except AttributeError:
            iosize = r._io.size
        buffer = util.buffer

        chunks = []
        ladd = chunks.append
        for idx, item in enumerate(chain):
            offset = start(item[0])
            bits = data[idx]
            for rev in item:
                chunkstart = start(rev)
                if inline:
                    chunkstart += (rev + 1) * iosize
                chunklength = length(rev)
                ladd(buffer(bits, chunkstart - offset, chunklength))

        return chunks

    def dodeltachain(rev):
        if not cache:
            r.clearcaches()
        r._deltachain(rev)

    def doread(chain):
        if not cache:
            r.clearcaches()
        for item in slicedchain:
            with lazy_reading(r):
                segmentforrevs(item[0], item[-1])

    def doslice(r, chain, size):
        for s in slicechunk(r, chain, targetsize=size):
            pass

    def dorawchunks(data, chain):
        if not cache:
            r.clearcaches()
        getrawchunks(data, chain)

    def dodecompress(chunks):
        decomp = r.decompress
        for chunk in chunks:
            decomp(chunk)

    def dopatch(text, bins):
        if not cache:
            r.clearcaches()
        mdiff.patches(text, bins)

    def dohash(text):
        if not cache:
            r.clearcaches()
        r.checkhash(text, node, rev=rev)

    def dorevision():
        if not cache:
            r.clearcaches()
        r.revision(node)

    try:
        from mercurial.revlogutils.deltas import slicechunk
    except ImportError:
        slicechunk = getattr(revlog, '_slicechunk', None)

    size = r.length(rev)
    chain = r._deltachain(rev)[0]

    with_sparse_read = False
    if hasattr(r, 'data_config'):
        with_sparse_read = r.data_config.with_sparse_read
    elif hasattr(r, '_withsparseread'):
        with_sparse_read = r._withsparseread
    if with_sparse_read:
        slicedchain = (chain,)
    else:
        slicedchain = tuple(slicechunk(r, chain, targetsize=size))
    data = [segmentforrevs(seg[0], seg[-1])[1] for seg in slicedchain]
    rawchunks = getrawchunks(data, slicedchain)
    bins = r._inner._chunks(chain)
    text = bytes(bins[0])
    bins = bins[1:]
    text = mdiff.patches(text, bins)

    benches = [
        (lambda: dorevision(), b'full'),
        (lambda: dodeltachain(rev), b'deltachain'),
        (lambda: doread(chain), b'read'),
    ]

    if with_sparse_read:
        slicing = (lambda: doslice(r, chain, size), b'slice-sparse-chain')
        benches.append(slicing)

    benches.extend(
        [
            (lambda: dorawchunks(data, slicedchain), b'rawchunks'),
            (lambda: dodecompress(rawchunks), b'decompress'),
            (lambda: dopatch(text, bins), b'patch'),
            (lambda: dohash(text), b'hash'),
        ]
    )

    timer, fm = gettimer(ui, opts)
    for fn, title in benches:
        timer(fn, title=title)
    fm.end()


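# Illustrative sketch (not upstream): the synopsis above is `-c|-m|FILE REV`,
# so the six phases can be timed for a single manifest revision, optionally
# reusing caches between runs; the revision number is hypothetical.
#
#   $ hg perf::revlogrevision -m 5000
#   $ hg perf::revlogrevision --cache -m 5000

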
@command(
    b'perf::revset|perfrevset',
    [
        (b'C', b'clear', False, b'clear volatile cache between each call.'),
        (b'', b'contexts', False, b'obtain changectx for each revision'),
    ]
    + formatteropts,
    b"REVSET",
)
def perfrevset(ui, repo, expr, clear=False, contexts=False, **opts):
    """benchmark the execution time of a revset

    Use the --clear option if you need to evaluate the impact of building the
    volatile revision set caches on revset execution. The volatile caches hold
    filtering- and obsolescence-related data."""
    opts = _byteskwargs(opts)

    timer, fm = gettimer(ui, opts)

    def d():
        if clear:
            repo.invalidatevolatilesets()
        if contexts:
            for ctx in repo.set(expr):
                pass
        else:
            for r in repo.revs(expr):
                pass

    timer(d)
    fm.end()


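# Illustrative sketch (not upstream): both options are declared above, and
# the revset itself is hypothetical. Comparing the three runs isolates the
# cost of changectx creation and of rebuilding the volatile caches.
#
#   $ hg perf::revset 'draft()'
#   $ hg perf::revset --contexts 'draft()'
#   $ hg perf::revset --clear 'draft()'

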
@command(
    b'perf::volatilesets|perfvolatilesets',
    [
        (b'', b'clear-obsstore', False, b'drop obsstore between each call.'),
    ]
    + formatteropts,
)
def perfvolatilesets(ui, repo, *names, **opts):
    """benchmark the computation of various volatile sets

    Volatile sets compute elements related to filtering and obsolescence."""
    opts = _byteskwargs(opts)
    timer, fm = gettimer(ui, opts)
    repo = repo.unfiltered()

    def getobs(name):
        def d():
            repo.invalidatevolatilesets()
            if opts[b'clear_obsstore']:
                clearfilecache(repo, b'obsstore')
            obsolete.getrevs(repo, name)

        return d

    allobs = sorted(obsolete.cachefuncs)
    if names:
        allobs = [n for n in allobs if n in names]

    for name in allobs:
        timer(getobs(name), title=name)

    def getfiltered(name):
        def d():
            repo.invalidatevolatilesets()
            if opts[b'clear_obsstore']:
                clearfilecache(repo, b'obsstore')
            repoview.filterrevs(repo, name)

        return d

    allfilter = sorted(repoview.filtertable)
    if names:
        allfilter = [n for n in allfilter if n in names]

    for name in allfilter:
        timer(getfiltered(name), title=name)
    fm.end()


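# Illustrative sketch (not upstream): positional names narrow the benchmark
# to specific sets; 'obsolete' and 'visible' are assumed to be present in
# obsolete.cachefuncs and repoview.filtertable respectively.
#
#   $ hg perf::volatilesets --clear-obsstore obsolete visible

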
@command(
    b'perf::branchmap|perfbranchmap',
    [
        (b'f', b'full', False, b'Includes build time of subset'),
        (
            b'',
            b'clear-revbranch',
            False,
            b'purge the revbranch cache between computation',
        ),
    ]
    + formatteropts,
)
def perfbranchmap(ui, repo, *filternames, **opts):
    """benchmark the update of a branchmap

    This benchmarks the full repo.branchmap() call with read and write disabled
    """
    opts = _byteskwargs(opts)
    full = opts.get(b"full", False)
    clear_revbranch = opts.get(b"clear_revbranch", False)
    timer, fm = gettimer(ui, opts)

    def getbranchmap(filtername):
        """generate a benchmark function for the filtername"""
        if filtername is None:
            view = repo
        else:
            view = repo.filtered(filtername)
        if util.safehasattr(view._branchcaches, '_per_filter'):
            filtered = view._branchcaches._per_filter
        else:
            # older versions
            filtered = view._branchcaches

        def d():
            if clear_revbranch:
                repo.revbranchcache()._clear()
            if full:
                view._branchcaches.clear()
            else:
                filtered.pop(filtername, None)
            view.branchmap()

        return d

    # add filter in smaller subset to bigger subset
    possiblefilters = set(repoview.filtertable)
    if filternames:
        possiblefilters &= set(filternames)
    subsettable = getbranchmapsubsettable()
    allfilters = []
    while possiblefilters:
        for name in possiblefilters:
            subset = subsettable.get(name)
            if subset not in possiblefilters:
                break
        else:
            assert False, b'subset cycle %s!' % possiblefilters
        allfilters.append(name)
        possiblefilters.remove(name)

    # warm the cache
    if not full:
        for name in allfilters:
            repo.filtered(name).branchmap()
    if not filternames or b'unfiltered' in filternames:
        # add unfiltered
        allfilters.append(None)

    old_branch_cache_from_file = None
    branchcacheread = None
    if util.safehasattr(branchmap, 'branch_cache_from_file'):
        old_branch_cache_from_file = branchmap.branch_cache_from_file
        branchmap.branch_cache_from_file = lambda *args: None
    elif util.safehasattr(branchmap.branchcache, 'fromfile'):
        branchcacheread = safeattrsetter(branchmap.branchcache, b'fromfile')
        branchcacheread.set(classmethod(lambda *args: None))
    else:
        # older versions
        branchcacheread = safeattrsetter(branchmap, b'read')
        branchcacheread.set(lambda *args: None)
    if util.safehasattr(branchmap, '_LocalBranchCache'):
        branchcachewrite = safeattrsetter(branchmap._LocalBranchCache, b'write')
        branchcachewrite.set(lambda *args: None)
    else:
        branchcachewrite = safeattrsetter(branchmap.branchcache, b'write')
        branchcachewrite.set(lambda *args: None)
    try:
        for name in allfilters:
            printname = name
            if name is None:
                printname = b'unfiltered'
            timer(getbranchmap(name), title=printname)
    finally:
        if old_branch_cache_from_file is not None:
            branchmap.branch_cache_from_file = old_branch_cache_from_file
        if branchcacheread is not None:
            branchcacheread.restore()
        branchcachewrite.restore()
    fm.end()


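# Illustrative sketch (not upstream): 'served' and 'visible' are standard
# repoview filter names and are assumed to exist in this repository.
#
#   $ hg perf::branchmap served visible --clear-revbranch

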
@command(
    b'perf::branchmapupdate|perfbranchmapupdate',
    [
        (b'', b'base', [], b'subset of revision to start from'),
        (b'', b'target', [], b'subset of revision to end with'),
        (b'', b'clear-caches', False, b'clear cache between each run'),
    ]
    + formatteropts,
)
def perfbranchmapupdate(ui, repo, base=(), target=(), **opts):
    """benchmark branchmap update from <base> revs to <target> revs

    If `--clear-caches` is passed, the following items will be reset before
    each update:
    * the changelog instance and associated indexes
    * the rev-branch-cache instance

    Examples:

    # update for the one last revision
    $ hg perfbranchmapupdate --base 'not tip' --target 'tip'

    # update for change coming with a new branch
    $ hg perfbranchmapupdate --base 'stable' --target 'default'
    """
    from mercurial import branchmap
    from mercurial import repoview

    opts = _byteskwargs(opts)
    timer, fm = gettimer(ui, opts)
    clearcaches = opts[b'clear_caches']
    unfi = repo.unfiltered()
    x = [None]  # used to pass data between closures

    # we use a `list` here to avoid possible side effect from smartset
    baserevs = list(scmutil.revrange(repo, base))
    targetrevs = list(scmutil.revrange(repo, target))
    if not baserevs:
        raise error.Abort(b'no revisions selected for --base')
    if not targetrevs:
        raise error.Abort(b'no revisions selected for --target')

    # make sure the target branchmap also contains the one in the base
    targetrevs = list(set(baserevs) | set(targetrevs))
    targetrevs.sort()

    cl = repo.changelog
    allbaserevs = list(cl.ancestors(baserevs, inclusive=True))
    allbaserevs.sort()
    alltargetrevs = frozenset(cl.ancestors(targetrevs, inclusive=True))

    newrevs = list(alltargetrevs.difference(allbaserevs))
    newrevs.sort()

    allrevs = frozenset(unfi.changelog.revs())
    basefilterrevs = frozenset(allrevs.difference(allbaserevs))
    targetfilterrevs = frozenset(allrevs.difference(alltargetrevs))

    def basefilter(repo, visibilityexceptions=None):
        return basefilterrevs

    def targetfilter(repo, visibilityexceptions=None):
        return targetfilterrevs

    msg = b'benchmark of branchmap with %d revisions with %d new ones\n'
    ui.status(msg % (len(allbaserevs), len(newrevs)))
    if targetfilterrevs:
        msg = b'(%d revisions still filtered)\n'
        ui.status(msg % len(targetfilterrevs))

    try:
        repoview.filtertable[b'__perf_branchmap_update_base'] = basefilter
        repoview.filtertable[b'__perf_branchmap_update_target'] = targetfilter

        baserepo = repo.filtered(b'__perf_branchmap_update_base')
        targetrepo = repo.filtered(b'__perf_branchmap_update_target')

        bcache = repo.branchmap()
        copy_method = 'copy'

        # initialize both kwargs dicts so the hasattr branch below cannot
        # leave copy_target_kwargs undefined
        copy_base_kwargs = copy_target_kwargs = {}
        if hasattr(bcache, 'copy'):
            if 'repo' in getargspec(bcache.copy).args:
                copy_base_kwargs = {"repo": baserepo}
                copy_target_kwargs = {"repo": targetrepo}
        else:
            copy_method = 'inherit_for'
            copy_base_kwargs = {"repo": baserepo}
            copy_target_kwargs = {"repo": targetrepo}

        # try to find an existing branchmap to reuse
        subsettable = getbranchmapsubsettable()
        candidatefilter = subsettable.get(None)
        while candidatefilter is not None:
            candidatebm = repo.filtered(candidatefilter).branchmap()
            if candidatebm.validfor(baserepo):
                filtered = repoview.filterrevs(repo, candidatefilter)
                missing = [r for r in allbaserevs if r in filtered]
                base = getattr(candidatebm, copy_method)(**copy_base_kwargs)
                base.update(baserepo, missing)
                break
            candidatefilter = subsettable.get(candidatefilter)
        else:
            # no suitable subset was found
            base = branchmap.branchcache()
            base.update(baserepo, allbaserevs)

        def setup():
            x[0] = getattr(base, copy_method)(**copy_target_kwargs)
            if clearcaches:
                unfi._revbranchcache = None
                clearchangelog(repo)

        def bench():
            x[0].update(targetrepo, newrevs)

        timer(bench, setup=setup)
        fm.end()
    finally:
        repoview.filtertable.pop(b'__perf_branchmap_update_base', None)
        repoview.filtertable.pop(b'__perf_branchmap_update_target', None)


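# Illustrative sketch (not upstream), following the docstring examples above;
# the base revset is hypothetical:
#
#   $ hg perf::branchmapupdate --base 'tip~1000' --target 'tip' --clear-caches

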
@command(
    b'perf::branchmapload|perfbranchmapload',
    [
        (b'f', b'filter', b'', b'Specify repoview filter'),
        (b'', b'list', False, b'List branchmap filter caches'),
        (b'', b'clear-revlogs', False, b'refresh changelog and manifest'),
    ]
    + formatteropts,
)
def perfbranchmapload(ui, repo, filter=b'', list=False, **opts):
    """benchmark reading the branchmap"""
    opts = _byteskwargs(opts)
    clearrevlogs = opts[b'clear_revlogs']

    if list:
        for name, kind, st in repo.cachevfs.readdir(stat=True):
            if name.startswith(b'branch2'):
                filtername = name.partition(b'-')[2] or b'unfiltered'
                ui.status(
                    b'%s - %s\n' % (filtername, util.bytecount(st.st_size))
                )
        return
    if not filter:
        filter = None
    subsettable = getbranchmapsubsettable()
    if filter is None:
        repo = repo.unfiltered()
    else:
        repo = repoview.repoview(repo, filter)

    repo.branchmap()  # make sure we have a relevant, up to date branchmap

    fromfile = getattr(branchmap, 'branch_cache_from_file', None)
    if fromfile is None:
        fromfile = getattr(branchmap.branchcache, 'fromfile', None)
    if fromfile is None:
        fromfile = branchmap.read

    currentfilter = filter
    # try once without timer, the filter may not be cached
    while fromfile(repo) is None:
        currentfilter = subsettable.get(currentfilter)
        if currentfilter is None:
            raise error.Abort(
                b'No branchmap cached for %s repo' % (filter or b'unfiltered')
            )
        repo = repo.filtered(currentfilter)
    timer, fm = gettimer(ui, opts)

    def setup():
        if clearrevlogs:
            clearchangelog(repo)

    def bench():
        fromfile(repo)

    timer(bench, setup=setup)
    fm.end()


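# Illustrative sketch (not upstream): list the on-disk branchmap caches, then
# time loading the one for the 'served' filter (assumed to be cached).
#
#   $ hg perf::branchmapload --list
#   $ hg perf::branchmapload -f served --clear-revlogs

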
@command(b'perf::loadmarkers|perfloadmarkers')
def perfloadmarkers(ui, repo):
    """benchmark the time to parse the on-disk markers for a repo

    Result is the number of markers in the repo."""
    timer, fm = gettimer(ui)
    svfs = getsvfs(repo)
    timer(lambda: len(obsolete.obsstore(repo, svfs)))
    fm.end()


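# Illustrative sketch (not upstream): the command takes no arguments, and the
# reported result doubles as a sanity check of the marker count.
#
#   $ hg perf::loadmarkers

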
@command(
    b'perf::lrucachedict|perflrucachedict',
    formatteropts
    + [
        (b'', b'costlimit', 0, b'maximum total cost of items in cache'),
        (b'', b'mincost', 0, b'smallest cost of items in cache'),
        (b'', b'maxcost', 100, b'maximum cost of items in cache'),
        (b'', b'size', 4, b'size of cache'),
        (b'', b'gets', 10000, b'number of key lookups'),
        (b'', b'sets', 10000, b'number of key sets'),
        (b'', b'mixed', 10000, b'number of mixed mode operations'),
        (
            b'',
            b'mixedgetfreq',
            50,
            b'frequency of get vs set ops in mixed mode',
        ),
    ],
    norepo=True,
)
def perflrucache(
    ui,
    mincost=0,
    maxcost=100,
    costlimit=0,
    size=4,
    gets=10000,
    sets=10000,
    mixed=10000,
    mixedgetfreq=50,
    **opts
):
    opts = _byteskwargs(opts)

    def doinit():
        for i in _xrange(10000):
            util.lrucachedict(size)

    costrange = list(range(mincost, maxcost + 1))

    values = []
    for i in _xrange(size):
        values.append(random.randint(0, _maxint))

    # Get mode fills the cache and tests raw lookup performance with no
    # eviction.
    getseq = []
    for i in _xrange(gets):
        getseq.append(random.choice(values))

    def dogets():
        d = util.lrucachedict(size)
        for v in values:
            d[v] = v
        for key in getseq:
            value = d[key]
            value  # silence pyflakes warning

    def dogetscost():
        d = util.lrucachedict(size, maxcost=costlimit)
        for i, v in enumerate(values):
            d.insert(v, v, cost=costs[i])
        for key in getseq:
            try:
                value = d[key]
                value  # silence pyflakes warning
            except KeyError:
                pass

    # Set mode tests insertion speed with cache eviction.
    setseq = []
    costs = []
    for i in _xrange(sets):
        setseq.append(random.randint(0, _maxint))
        costs.append(random.choice(costrange))

    def doinserts():
        d = util.lrucachedict(size)
        for v in setseq:
            d.insert(v, v)

    def doinsertscost():
        d = util.lrucachedict(size, maxcost=costlimit)
        for i, v in enumerate(setseq):
            d.insert(v, v, cost=costs[i])

    def dosets():
        d = util.lrucachedict(size)
        for v in setseq:
            d[v] = v

    # Mixed mode randomly performs gets and sets with eviction.
    mixedops = []
    for i in _xrange(mixed):
        r = random.randint(0, 100)
        if r < mixedgetfreq:
            op = 0
        else:
            op = 1

        mixedops.append(
            (op, random.randint(0, size * 2), random.choice(costrange))
        )

    def domixed():
        d = util.lrucachedict(size)

        for op, v, cost in mixedops:
            if op == 0:
                try:
                    d[v]
                except KeyError:
                    pass
            else:
                d[v] = v

    def domixedcost():
        d = util.lrucachedict(size, maxcost=costlimit)

        for op, v, cost in mixedops:
            if op == 0:
                try:
                    d[v]
                except KeyError:
                    pass
            else:
                d.insert(v, v, cost=cost)

    benches = [
        (doinit, b'init'),
    ]

    if costlimit:
        benches.extend(
            [
                (dogetscost, b'gets w/ cost limit'),
                (doinsertscost, b'inserts w/ cost limit'),
                (domixedcost, b'mixed w/ cost limit'),
            ]
        )
    else:
        benches.extend(
            [
                (dogets, b'gets'),
                (doinserts, b'inserts'),
                (dosets, b'sets'),
                (domixed, b'mixed'),
            ]
        )

    for fn, title in benches:
        timer, fm = gettimer(ui, opts)
        timer(fn, title=title)
        fm.end()


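# Illustrative sketch (not upstream): all flags are declared above. The first
# run exercises the plain LRU paths, the second the cost-bounded ones.
#
#   $ hg perf::lrucachedict --size 4 --gets 10000 --sets 10000
#   $ hg perf::lrucachedict --size 4 --costlimit 500 --mincost 1 --maxcost 100

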
@command(
    b'perf::write|perfwrite',
    formatteropts
    + [
        (b'', b'write-method', b'write', b'ui write method'),
        (b'', b'nlines', 100, b'number of lines'),
        (b'', b'nitems', 100, b'number of items (per line)'),
        (b'', b'item', b'x', b'item that is written'),
        (b'', b'batch-line', None, b'pass whole line to write method at once'),
        (b'', b'flush-line', None, b'flush after each line'),
    ],
)
def perfwrite(ui, repo, **opts):
    """microbenchmark ui.write (and others)"""
    opts = _byteskwargs(opts)

    write = getattr(ui, _sysstr(opts[b'write_method']))
    nlines = int(opts[b'nlines'])
    nitems = int(opts[b'nitems'])
    item = opts[b'item']
    batch_line = opts.get(b'batch_line')
    flush_line = opts.get(b'flush_line')

    if batch_line:
        line = item * nitems + b'\n'

    def benchmark():
        for i in pycompat.xrange(nlines):
            if batch_line:
                write(line)
            else:
                for i in pycompat.xrange(nitems):
                    write(item)
                write(b'\n')
            if flush_line:
                ui.flush()
        ui.flush()

    timer, fm = gettimer(ui, opts)
    timer(benchmark)
    fm.end()


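# Illustrative sketch (not upstream): contrasting per-item writes with
# whole-line writes shows the overhead of repeated ui.write calls.
#
#   $ hg perf::write --nlines 1000 --nitems 100
#   $ hg perf::write --nlines 1000 --nitems 100 --batch-line

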
def uisetup(ui):
    if util.safehasattr(cmdutil, b'openrevlog') and not util.safehasattr(
        commands, b'debugrevlogopts'
    ):
        # for "historical portability":
        # In this case, Mercurial should be 1.9 (or a79fea6b3e77) -
        # 3.7 (or 5606f7d0d063). Therefore, '--dir' option for
        # openrevlog() should cause failure, because it has been
        # available since 3.5 (or 49c583ca48c4).
        def openrevlog(orig, repo, cmd, file_, opts):
            if opts.get(b'dir') and not util.safehasattr(repo, b'dirlog'):
                raise error.Abort(
                    b"This version doesn't support --dir option",
                    hint=b"use 3.5 or later",
                )
            return orig(repo, cmd, file_, opts)

        name = _sysstr(b'openrevlog')
        extensions.wrapfunction(cmdutil, name, openrevlog)


@command(
    b'perf::progress|perfprogress',
    formatteropts
    + [
        (b'', b'topic', b'topic', b'topic for progress messages'),
        (b'c', b'total', 1000000, b'total value we are progressing to'),
    ],
    norepo=True,
)
def perfprogress(ui, topic=None, total=None, **opts):
    """printing of progress bars"""
    opts = _byteskwargs(opts)

    timer, fm = gettimer(ui, opts)

    def doprogress():
        with ui.makeprogress(topic, total=total) as progress:
            for i in _xrange(total):
                progress.increment()

    timer(doprogress)
    fm.end()
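
# Illustrative sketch (not upstream): a smaller total keeps the run short
# while still exercising the progress machinery.
#
#   $ hg perf::progress --total 100000
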
@@ -1,823 +1,826 @@
# Copyright 2009-2010 Gregory P. Ward
# Copyright 2009-2010 Intelerad Medical Systems Incorporated
# Copyright 2010-2011 Fog Creek Software
# Copyright 2010-2011 Unity Technologies
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

'''largefiles utility code: must not import other modules in this package.'''

import contextlib
import copy
import os
import stat

from mercurial.i18n import _
from mercurial.node import hex
from mercurial.pycompat import open

from mercurial import (
    dirstate,
    encoding,
    error,
    httpconnection,
    match as matchmod,
    pycompat,
    requirements,
    scmutil,
    sparse,
    util,
    vfs as vfsmod,
)
from mercurial.utils import hashutil
from mercurial.dirstateutils import timestamp

shortname = b'.hglf'
shortnameslash = shortname + b'/'
longname = b'largefiles'

# -- Private worker functions ------------------------------------------


@contextlib.contextmanager
def lfstatus(repo, value=True):
    oldvalue = getattr(repo, 'lfstatus', False)
    repo.lfstatus = value
    try:
        yield
    finally:
        repo.lfstatus = oldvalue


def getminsize(ui, assumelfiles, opt, default=10):
    lfsize = opt
    if not lfsize and assumelfiles:
        lfsize = ui.config(longname, b'minsize', default=default)
    if lfsize:
        try:
            lfsize = float(lfsize)
        except ValueError:
            raise error.Abort(
                _(b'largefiles: size must be number (not %s)\n') % lfsize
            )
    if lfsize is None:
        raise error.Abort(_(b'minimum size for largefiles must be specified'))
    return lfsize


def link(src, dest):
    """Try to create hardlink - if that fails, efficiently make a copy."""
    util.makedirs(os.path.dirname(dest))
    try:
        util.oslink(src, dest)
    except OSError:
        # if hardlinks fail, fallback on atomic copy
        with open(src, b'rb') as srcf, util.atomictempfile(dest) as dstf:
            for chunk in util.filechunkiter(srcf):
                dstf.write(chunk)
        os.chmod(dest, os.stat(src).st_mode)


def usercachepath(ui, hash):
    """Return the correct location in the "global" largefiles cache for a file
    with the given hash.
    This cache is used for sharing of largefiles across repositories - both
    to preserve download bandwidth and storage space."""
    return os.path.join(_usercachedir(ui), hash)


def _usercachedir(ui, name=longname):
    '''Return the location of the "global" largefiles cache.'''
    path = ui.configpath(name, b'usercache')
    if path:
        return path

    hint = None

    if pycompat.iswindows:
        appdata = encoding.environ.get(
            b'LOCALAPPDATA', encoding.environ.get(b'APPDATA')
        )
        if appdata:
            return os.path.join(appdata, name)

        hint = _(b"define %s or %s in the environment, or set %s.usercache") % (
            b"LOCALAPPDATA",
            b"APPDATA",
            name,
        )
    elif pycompat.isdarwin:
        home = encoding.environ.get(b'HOME')
        if home:
            return os.path.join(home, b'Library', b'Caches', name)

        hint = _(b"define %s in the environment, or set %s.usercache") % (
            b"HOME",
            name,
        )
    elif pycompat.isposix:
        path = encoding.environ.get(b'XDG_CACHE_HOME')
        if path:
            return os.path.join(path, name)
        home = encoding.environ.get(b'HOME')
        if home:
            return os.path.join(home, b'.cache', name)

        hint = _(b"define %s or %s in the environment, or set %s.usercache") % (
            b"XDG_CACHE_HOME",
            b"HOME",
            name,
        )
    else:
        raise error.Abort(
            _(b'unknown operating system: %s\n') % pycompat.osname
        )

    raise error.Abort(_(b'unknown %s usercache location') % name, hint=hint)


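# Illustrative configuration sketch (not upstream): the configpath() lookup
# above means an explicit usercache setting always wins over the per-platform
# defaults; the path below is hypothetical.
#
#   [largefiles]
#   usercache = /srv/hg/largefiles-cache

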
def inusercache(ui, hash):
    path = usercachepath(ui, hash)
    return os.path.exists(path)


def findfile(repo, hash):
    """Return store path of the largefile with the specified hash.
    As a side effect, the file might be linked from user cache.
    Return None if the file can't be found locally."""
    path, exists = findstorepath(repo, hash)
    if exists:
        repo.ui.note(_(b'found %s in store\n') % hash)
        return path
    elif inusercache(repo.ui, hash):
        repo.ui.note(_(b'found %s in system cache\n') % hash)
        path = storepath(repo, hash)
        link(usercachepath(repo.ui, hash), path)
        return path
    return None


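# Usage sketch (illustrative): copyfromcache() below treats a None return
# from findfile() as "not found in either cache", so callers must fetch the
# largefile from a remote store first:
#
#   path = findfile(repo, hash)
#   if path is None:
#       ...download into the user cache, then call findfile() again...
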
class largefilesdirstate(dirstate.dirstate):
    _large_file_dirstate = True
    _tr_key_suffix = b'-large-files'

    def __getitem__(self, key):
        return super(largefilesdirstate, self).__getitem__(unixpath(key))

    def set_tracked(self, f):
        return super(largefilesdirstate, self).set_tracked(unixpath(f))

    def set_untracked(self, f):
        return super(largefilesdirstate, self).set_untracked(unixpath(f))

    def normal(self, f, parentfiledata=None):
        # not sure if we should pass the `parentfiledata` down or throw it
        # away. So throwing it away to stay on the safe side.
        return super(largefilesdirstate, self).normal(unixpath(f))

    def remove(self, f):
        return super(largefilesdirstate, self).remove(unixpath(f))

    def add(self, f):
        return super(largefilesdirstate, self).add(unixpath(f))

    def drop(self, f):
        return super(largefilesdirstate, self).drop(unixpath(f))

    def forget(self, f):
        return super(largefilesdirstate, self).forget(unixpath(f))

    def normallookup(self, f):
        return super(largefilesdirstate, self).normallookup(unixpath(f))

    def _ignore(self, f):
        return False

    def write(self, tr):
        # (1) disable PENDING mode always
        #     (lfdirstate isn't yet managed as a part of the transaction)
        # (2) avoid develwarn 'use dirstate.write with ....'
        if tr:
            tr.addbackup(b'largefiles/dirstate', location=b'plain')
        super(largefilesdirstate, self).write(None)


def openlfdirstate(ui, repo, create=True):
    """
    Return a dirstate object that tracks largefiles: i.e. its root is
    the repo root, but it is saved in .hg/largefiles/dirstate.

    If a dirstate object already exists and is being used for a 'changing_*'
    context, it will be returned.
    """
    sub_dirstate = getattr(repo.dirstate, '_sub_dirstate', None)
    if sub_dirstate is not None:
        return sub_dirstate
    vfs = repo.vfs
    lfstoredir = longname
    opener = vfsmod.vfs(vfs.join(lfstoredir))
    use_dirstate_v2 = requirements.DIRSTATE_V2_REQUIREMENT in repo.requirements
    lfdirstate = largefilesdirstate(
        opener,
        ui,
        repo.root,
        repo.dirstate._validate,
        lambda: sparse.matcher(repo),
        repo.nodeconstants,
        use_dirstate_v2,
    )

    # If the largefiles dirstate does not exist, populate and create
    # it. This ensures that we create it on the first meaningful
    # largefiles operation in a new clone.
    if create and not vfs.exists(vfs.join(lfstoredir, b'dirstate')):
        try:
            with repo.wlock(wait=False), lfdirstate.changing_files(repo):
                matcher = getstandinmatcher(repo)
                standins = repo.dirstate.walk(
                    matcher, subrepos=[], unknown=False, ignored=False
                )

                if len(standins) > 0:
                    vfs.makedirs(lfstoredir)

                for standin in standins:
                    lfile = splitstandin(standin)
                    lfdirstate.hacky_extension_update_file(
                        lfile,
                        p1_tracked=True,
                        wc_tracked=True,
                        possibly_dirty=True,
                    )
        except error.LockError:
            # Assume that whatever was holding the lock was important.
            # If we were doing something important, we would already have
            # either the lock or a largefile dirstate.
            pass
    return lfdirstate


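# Usage sketch (illustrative): callers normally open the largefiles dirstate
# under the working-copy lock before mutating it, as addlargefiles() in
# overrides.py does:
#
#   with repo.wlock():
#       lfdirstate = openlfdirstate(ui, repo)
#       lfdirstate.set_tracked(f)
#       lfdirstate.write(repo.currenttransaction())
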
def lfdirstatestatus(lfdirstate, repo):
    pctx = repo[b'.']
    match = matchmod.always()
    unsure, s, mtime_boundary = lfdirstate.status(
        match, subrepos=[], ignored=False, clean=False, unknown=False
    )
    modified, clean = s.modified, s.clean
    wctx = repo[None]
    for lfile in unsure:
        try:
            fctx = pctx[standin(lfile)]
        except LookupError:
            fctx = None
        if not fctx or readasstandin(fctx) != hashfile(repo.wjoin(lfile)):
            modified.append(lfile)
        else:
            clean.append(lfile)
            st = wctx[lfile].lstat()
            mode = st.st_mode
            size = st.st_size
            mtime = timestamp.reliable_mtime_of(st, mtime_boundary)
            if mtime is not None:
                cache_data = (mode, size, mtime)
                lfdirstate.set_clean(lfile, cache_data)
    return s


def listlfiles(repo, rev=None, matcher=None):
    """return a list of largefiles in the working copy or the
    specified changeset"""

    if matcher is None:
        matcher = getstandinmatcher(repo)

    # ignore unknown files in working directory
    return [
        splitstandin(f)
        for f in repo[rev].walk(matcher)
        if rev is not None or repo.dirstate.get_entry(f).any_tracked
    ]


def instore(repo, hash, forcelocal=False):
    '''Return true if a largefile with the given hash exists in the store'''
    return os.path.exists(storepath(repo, hash, forcelocal))


def storepath(repo, hash, forcelocal=False):
    """Return the correct location in the repository largefiles store for a
    file with the given hash."""
    if not forcelocal and repo.shared():
        return repo.vfs.reljoin(repo.sharedpath, longname, hash)
    return repo.vfs.join(longname, hash)


def findstorepath(repo, hash):
    """Search through the local store path(s) to find the file for the given
    hash.  If the file is not found, its path in the primary store is returned.
    The return value is a tuple of (path, exists(path)).
    """
    # For shared repos, the primary store is in the share source.  But for
    # backward compatibility, force a lookup in the local store if it wasn't
    # found in the share source.
    path = storepath(repo, hash, False)

    if instore(repo, hash):
        return (path, True)
    elif repo.shared() and instore(repo, hash, True):
        return storepath(repo, hash, True), True

    return (path, False)


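# Lookup-order sketch (illustrative): for a shared repository the primary
# store lives under the share source's .hg directory, and the repo's own
# store is only consulted as a backward-compatibility fallback:
#
#   path, exists = findstorepath(repo, hash)
#   # shared:  <sharedpath>/largefiles/<hash>, then <repo>/.hg/largefiles/<hash>
#   # normal:  <repo>/.hg/largefiles/<hash>
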
def copyfromcache(repo, hash, filename):
    """Copy the specified largefile from the repo or system cache to
    filename in the repository. Return true on success or false if the
    file was not found in either cache (which should not happen:
    this is meant to be called only after ensuring that the needed
    largefile exists in the cache)."""
    wvfs = repo.wvfs
    path = findfile(repo, hash)
    if path is None:
        return False
    wvfs.makedirs(wvfs.dirname(wvfs.join(filename)))
    # The write may fail before the file is fully written, but we
    # don't use atomic writes in the working copy.
    with open(path, b'rb') as srcfd, wvfs(filename, b'wb') as destfd:
        gothash = copyandhash(util.filechunkiter(srcfd), destfd)
    if gothash != hash:
        repo.ui.warn(
            _(b'%s: data corruption in %s with hash %s\n')
            % (filename, path, gothash)
        )
        wvfs.unlink(filename)
        return False
    return True


def copytostore(repo, ctx, file, fstandin):
    wvfs = repo.wvfs
    hash = readasstandin(ctx[fstandin])
    if instore(repo, hash):
        return
    if wvfs.exists(file):
        copytostoreabsolute(repo, wvfs.join(file), hash)
    else:
        repo.ui.warn(
            _(b"%s: largefile %s not available from local store\n")
            % (file, hash)
        )


def copyalltostore(repo, node):
    '''Copy all largefiles in a given revision to the store'''

    ctx = repo[node]
    for filename in ctx.files():
        realfile = splitstandin(filename)
        if realfile is not None and filename in ctx.manifest():
            copytostore(repo, ctx, realfile, filename)


def copytostoreabsolute(repo, file, hash):
    if inusercache(repo.ui, hash):
        link(usercachepath(repo.ui, hash), storepath(repo, hash))
    else:
        util.makedirs(os.path.dirname(storepath(repo, hash)))
        with open(file, b'rb') as srcf:
            with util.atomictempfile(
                storepath(repo, hash), createmode=repo.store.createmode
            ) as dstf:
                for chunk in util.filechunkiter(srcf):
                    dstf.write(chunk)
        linktousercache(repo, hash)


def linktousercache(repo, hash):
    """Link / copy the largefile with the specified hash from the store
    to the cache."""
    path = usercachepath(repo.ui, hash)
    link(storepath(repo, hash), path)


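# Flow sketch (illustrative): committing a largefile populates both caches.
# copytostoreabsolute() either links (or copies) an existing user-cache entry
# into the store, or copies the working-directory file in atomically and then
# calls linktousercache(), so later operations can reuse the blob.
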
def getstandinmatcher(repo, rmatcher=None):
    '''Return a match object that applies rmatcher to the standin directory'''
    wvfs = repo.wvfs
    standindir = shortname

    # no warnings about missing files or directories
    badfn = lambda f, msg: None

    if rmatcher and not rmatcher.always():
        pats = [wvfs.join(standindir, pat) for pat in rmatcher.files()]
        if not pats:
            pats = [wvfs.join(standindir)]
        match = scmutil.match(repo[None], pats, badfn=badfn)
    else:
        # no patterns: relative to repo root
        match = scmutil.match(repo[None], [wvfs.join(standindir)], badfn=badfn)
    return match


def composestandinmatcher(repo, rmatcher):
    """Return a matcher that accepts standins corresponding to the
    files accepted by rmatcher. Pass the list of files in the matcher
    as the paths specified by the user."""
    smatcher = getstandinmatcher(repo, rmatcher)
    isstandin = smatcher.matchfn

    def composedmatchfn(f):
        return isstandin(f) and rmatcher.matchfn(splitstandin(f))

    smatcher._was_tampered_with = True
    smatcher.matchfn = composedmatchfn

    return smatcher


def standin(filename):
    """Return the repo-relative path to the standin for the specified big
    file."""
    # Notes:
    # 1) Some callers want an absolute path, but for instance addlargefiles
    #    needs it repo-relative so it can be passed to repo[None].add().  So
    #    leave it up to the caller to use repo.wjoin() to get an absolute path.
    # 2) Join with '/' because that's what dirstate always uses, even on
    #    Windows. Change existing separator to '/' first in case we are
    #    passed filenames from an external source (like the command line).
    return shortnameslash + util.pconvert(filename)


def isstandin(filename):
    """Return true if filename is a big file standin. filename must be
    in Mercurial's internal form (slash-separated)."""
    return filename.startswith(shortnameslash)


def splitstandin(filename):
    # Split on / because that's what dirstate always uses, even on Windows.
    # Change local separator to / first just in case we are passed filenames
    # from an external source (like the command line).
    bits = util.pconvert(filename).split(b'/', 1)
    if len(bits) == 2 and bits[0] == shortname:
        return bits[1]
    else:
        return None


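# Round-trip sketch (illustrative; shortname is the standin directory,
# b'.hglf' in this extension):
#
#   standin(b'data/big.bin')             -> b'.hglf/data/big.bin'
#   isstandin(b'.hglf/data/big.bin')     -> True
#   splitstandin(b'.hglf/data/big.bin')  -> b'data/big.bin'
#   splitstandin(b'data/big.bin')        -> None
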
def updatestandin(repo, lfile, standin):
    """Re-calculate hash value of lfile and write it into standin

    This assumes that "lfutil.standin(lfile) == standin", for efficiency.
    """
    file = repo.wjoin(lfile)
    if repo.wvfs.exists(lfile):
        hash = hashfile(file)
        executable = getexecutable(file)
        writestandin(repo, standin, hash, executable)
    else:
        raise error.Abort(_(b'%s: file not found!') % lfile)


def readasstandin(fctx):
    """read hex hash from given filectx of standin file

    This encapsulates how "standin" data is stored into storage layer."""
    return fctx.data().strip()


def writestandin(repo, standin, hash, executable):
    '''write hash to <repo.root>/<standin>'''
    repo.wwrite(standin, hash + b'\n', executable and b'x' or b'')


def copyandhash(instream, outfile):
    """Read bytes from instream (iterable) and write them to outfile,
    computing the SHA-1 hash of the data along the way. Return the hash."""
    hasher = hashutil.sha1(b'')
    for data in instream:
        hasher.update(data)
        outfile.write(data)
    return hex(hasher.digest())


def hashfile(file):
    if not os.path.exists(file):
        return b''
    with open(file, b'rb') as fd:
        return hexsha1(fd)


def getexecutable(filename):
    mode = os.stat(filename).st_mode
    return (
        (mode & stat.S_IXUSR)
        and (mode & stat.S_IXGRP)
        and (mode & stat.S_IXOTH)
    )


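# Sketch (illustrative): copyandhash() is the single-pass copy-and-verify
# primitive used by copyfromcache() above; on its own it behaves like:
#
#   with open(src, 'rb') as s, open(dst, 'wb') as d:
#       gothash = copyandhash(util.filechunkiter(s), d)
#   assert gothash == hashfile(src)   # same hex-encoded SHA-1
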
def urljoin(first, second, *arg):
    def join(left, right):
        if not left.endswith(b'/'):
            left += b'/'
        if right.startswith(b'/'):
            right = right[1:]
        return left + right

    url = join(first, second)
    for a in arg:
        url = join(url, a)
    return url


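# Example (illustrative): slashes at the seams are normalized, so
#
#   urljoin(b'http://example.com/', b'/largefiles', b'abc123')
#
# yields b'http://example.com/largefiles/abc123'.
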
def hexsha1(fileobj):
    """hexsha1 returns the hex-encoded sha1 sum of the data in the file-like
    object fileobj"""
    h = hashutil.sha1()
    for chunk in util.filechunkiter(fileobj):
        h.update(chunk)
    return hex(h.digest())


def httpsendfile(ui, filename):
    return httpconnection.httpsendfile(ui, filename, b'rb')


def unixpath(path):
    '''Return a version of path normalized for use with the lfdirstate.'''
    return util.pconvert(os.path.normpath(path))


def islfilesrepo(repo):
    '''Return true if the repo is a largefile repo.'''
    if b'largefiles' in repo.requirements:
        for entry in repo.store.data_entries():
            if entry.is_revlog and shortnameslash in entry.target_id:
                return True

    return any(openlfdirstate(repo.ui, repo, False))


class storeprotonotcapable(Exception):
    def __init__(self, storetypes):
        self.storetypes = storetypes


def getstandinsstate(repo):
    standins = []
    matcher = getstandinmatcher(repo)
    wctx = repo[None]
    for standin in repo.dirstate.walk(
        matcher, subrepos=[], unknown=False, ignored=False
    ):
        lfile = splitstandin(standin)
        try:
            hash = readasstandin(wctx[standin])
        except IOError:
            hash = None
        standins.append((lfile, hash))
    return standins


def synclfdirstate(repo, lfdirstate, lfile, normallookup):
    lfstandin = standin(lfile)
    if lfstandin not in repo.dirstate:
        lfdirstate.hacky_extension_update_file(
            lfile,
            p1_tracked=False,
            wc_tracked=False,
        )
    else:
        entry = repo.dirstate.get_entry(lfstandin)
        lfdirstate.hacky_extension_update_file(
            lfile,
            wc_tracked=entry.tracked,
            p1_tracked=entry.p1_tracked,
            p2_info=entry.p2_info,
            possibly_dirty=True,
        )


def markcommitted(orig, ctx, node):
    repo = ctx.repo()

    with repo.dirstate.changing_parents(repo):
        orig(node)

        # ATTENTION: "ctx.files()" may differ from "repo[node].files()"
        # because files coming from the 2nd parent are omitted in the latter.
        #
        # The former should be used to get targets of "synclfdirstate",
        # because such files:
        # - are marked as "a" by "patch.patch()" (e.g. via transplant), and
        # - have to be marked as "n" after commit, but
        # - aren't listed in "repo[node].files()"

        lfdirstate = openlfdirstate(repo.ui, repo)
        for f in ctx.files():
            lfile = splitstandin(f)
            if lfile is not None:
                synclfdirstate(repo, lfdirstate, lfile, False)

    # As part of committing, copy all of the largefiles into the cache.
    #
    # Using "node" instead of "ctx" implies an additional "repo[node]"
    # lookup inside copyalltostore(), but avoids a redundant check for
    # files coming from the 2nd parent, which should already exist in
    # the store at merge time.
    copyalltostore(repo, node)


def getlfilestoupdate(oldstandins, newstandins):
    changedstandins = set(oldstandins).symmetric_difference(set(newstandins))
    filelist = []
    for f in changedstandins:
        if f[0] not in filelist:
            filelist.append(f[0])
    return filelist


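# Example (illustrative): standin states are (lfile, hash) pairs, so a
# changed hash surfaces the file name exactly once:
#
#   old = [(b'a.bin', b'1111'), (b'b.bin', b'2222')]
#   new = [(b'a.bin', b'3333'), (b'b.bin', b'2222')]
#   getlfilestoupdate(old, new)   # -> [b'a.bin']
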
def getlfilestoupload(repo, missing, addfunc):
    makeprogress = repo.ui.makeprogress
    with makeprogress(
        _(b'finding outgoing largefiles'),
        unit=_(b'revisions'),
        total=len(missing),
    ) as progress:
        for i, n in enumerate(missing):
            progress.update(i)
            parents = [p for p in repo[n].parents() if p != repo.nullid]

            with lfstatus(repo, value=False):
                ctx = repo[n]

            files = set(ctx.files())
            if len(parents) == 2:
                mc = ctx.manifest()
                mp1 = ctx.p1().manifest()
                mp2 = ctx.p2().manifest()
                for f in mp1:
                    if f not in mc:
                        files.add(f)
                for f in mp2:
                    if f not in mc:
                        files.add(f)
                for f in mc:
                    if mc[f] != mp1.get(f, None) or mc[f] != mp2.get(f, None):
                        files.add(f)
            for fn in files:
                if isstandin(fn) and fn in ctx:
                    addfunc(fn, readasstandin(ctx[fn]))


def updatestandinsbymatch(repo, match):
    """Update standins in the working directory according to the specified
    match

    This returns a (possibly modified) ``match`` object to be used for
    the subsequent commit process.
    """

    ui = repo.ui

    # Case 1: user calls commit with no specific files or
    # include/exclude patterns: refresh and commit all files that
    # are "dirty".
    if match is None or match.always():
        # Spend a bit of time here to get a list of files we know
        # are modified so we can compare only against those.
        # It can cost a lot of time (several seconds)
        # otherwise to update all standins if the largefiles are
        # large.
        dirtymatch = matchmod.always()
        with repo.dirstate.running_status(repo):
            lfdirstate = openlfdirstate(ui, repo)
            unsure, s, mtime_boundary = lfdirstate.status(
                dirtymatch,
                subrepos=[],
                ignored=False,
                clean=False,
                unknown=False,
            )
        modifiedfiles = unsure + s.modified + s.added + s.removed
        lfiles = listlfiles(repo)
        # this only loops through largefiles that exist (not
        # removed/renamed)
        for lfile in lfiles:
            if lfile in modifiedfiles:
                fstandin = standin(lfile)
                if repo.wvfs.exists(fstandin):
                    # this handles the case where a rebase is being
                    # performed and the working copy is not updated
                    # yet.
                    if repo.wvfs.exists(lfile):
                        updatestandin(repo, lfile, fstandin)

        return match

    lfiles = listlfiles(repo)
    match._was_tampered_with = True
    match._files = repo._subdirlfs(match.files(), lfiles)

    # Case 2: user calls commit with specified patterns: refresh
    # any matching big files.
    smatcher = composestandinmatcher(repo, match)
    standins = repo.dirstate.walk(
        smatcher, subrepos=[], unknown=False, ignored=False
    )

    # No matching big files: get out of the way and pass control to
    # the usual commit() method.
    if not standins:
        return match

    # Refresh all matching big files. It's possible that the
    # commit will end up failing, in which case the big files will
    # stay refreshed. No harm done: the user modified them and
    # asked to commit them, so sooner or later we're going to
    # refresh the standins. Might as well leave them refreshed.
    lfdirstate = openlfdirstate(ui, repo)
    for fstandin in standins:
        lfile = splitstandin(fstandin)
        if lfdirstate.get_entry(lfile).tracked:
            updatestandin(repo, lfile, fstandin)

    # Cook up a new matcher that only matches regular files or
    # standins corresponding to the big files requested by the
    # user. Have to modify _files to prevent commit() from
    # complaining "not tracked" for big files.
    match = copy.copy(match)
    match._was_tampered_with = True
    origmatchfn = match.matchfn

    # Check both the list of largefiles and the list of
    # standins because if a largefile was removed, it
    # won't be in the list of largefiles at this point
    match._files += sorted(standins)

    actualfiles = []
    for f in match._files:
        fstandin = standin(f)

        # For largefiles, only one of the normal and standin should be
        # committed (except if one of them is a remove). In the case of a
        # standin removal, drop the normal file if it is unknown to dirstate.
        # Thus, skip plain largefile names but keep the standin.
        if f in lfiles or fstandin in standins:
            if not repo.dirstate.get_entry(fstandin).removed:
                if not repo.dirstate.get_entry(f).removed:
                    continue
            elif not repo.dirstate.get_entry(f).any_tracked:
                continue

        actualfiles.append(f)
    match._files = actualfiles

    def matchfn(f):
        if origmatchfn(f):
            return f not in lfiles
        else:
            return f in standins

    match.matchfn = matchfn

    return match


class automatedcommithook:
    """Stateful hook to update standins at the 1st commit of resuming

    For efficiency, updating standins in the working directory should
    be avoided during automated committing (like rebase, transplant and
    so on), because they should be updated before committing.

    But the 1st commit of resuming automated committing (e.g. ``rebase
    --continue``) should update them, because largefiles may be
    modified manually.
    """

    def __init__(self, resuming):
        self.resuming = resuming

    def __call__(self, repo, match):
        if self.resuming:
            self.resuming = False  # avoids updating at subsequent commits
            return updatestandinsbymatch(repo, match)
        else:
            return match


def getstatuswriter(ui, repo, forcibly=None):
    """Return the function to write largefiles specific status out

    If ``forcibly`` is ``None``, this returns the last element of
    ``repo._lfstatuswriters`` as "default" writer function.

    Otherwise, this returns the function to always write out (or
    ignore if ``not forcibly``) status.
    """
    if forcibly is None and hasattr(repo, '_largefilesenabled'):
        return repo._lfstatuswriters[-1]
    else:
        if forcibly:
            return ui.status  # forcibly WRITE OUT
        else:
            return lambda *msg, **opts: None  # forcibly IGNORE
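
# Usage sketch (illustrative): callers fetch a writer once and route all
# largefiles-specific status output through it, so quiet contexts can swap
# in the no-op writer instead of muting ui globally:
#
#   statuswriter = getstatuswriter(ui, repo, forcibly=None)
#   statuswriter(_(b'getting changed largefiles\n'))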
@@ -1,1924 +1,1932 b''
# Copyright 2009-2010 Gregory P. Ward
# Copyright 2009-2010 Intelerad Medical Systems Incorporated
# Copyright 2010-2011 Fog Creek Software
# Copyright 2010-2011 Unity Technologies
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

'''Overridden Mercurial commands and functions for the largefiles extension'''

import contextlib
import copy
import os

from mercurial.i18n import _

from mercurial.pycompat import open

from mercurial.hgweb import webcommands

from mercurial import (
    archival,
    cmdutil,
    copies as copiesmod,
    dirstate,
    error,
    exchange,
    extensions,
    exthelper,
    filemerge,
    hg,
    logcmdutil,
    match as matchmod,
    merge,
    mergestate as mergestatemod,
    pathutil,
    pycompat,
    scmutil,
    smartset,
    subrepo,
    url as urlmod,
    util,
)

from mercurial.upgrade_utils import (
    actions as upgrade_actions,
)

from . import (
    lfcommands,
    lfutil,
    storefactory,
)

ACTION_ADD = mergestatemod.ACTION_ADD
ACTION_DELETED_CHANGED = mergestatemod.ACTION_DELETED_CHANGED
ACTION_GET = mergestatemod.ACTION_GET
ACTION_KEEP = mergestatemod.ACTION_KEEP
ACTION_REMOVE = mergestatemod.ACTION_REMOVE

eh = exthelper.exthelper()

lfstatus = lfutil.lfstatus

MERGE_ACTION_LARGEFILE_MARK_REMOVED = mergestatemod.MergeAction('lfmr')

# -- Utility functions: commonly/repeatedly needed functionality ---------------


def composelargefilematcher(match, manifest):
    """create a matcher that matches only the largefiles in the original
    matcher"""
    m = copy.copy(match)
    m._was_tampered_with = True
    lfile = lambda f: lfutil.standin(f) in manifest
    m._files = [lf for lf in m._files if lfile(lf)]
    m._fileset = set(m._files)
    m.always = lambda: False
    origmatchfn = m.matchfn
    m.matchfn = lambda f: lfile(f) and origmatchfn(f)
    return m


def composenormalfilematcher(match, manifest, exclude=None):
    excluded = set()
    if exclude is not None:
        excluded.update(exclude)

    m = copy.copy(match)
    m._was_tampered_with = True
    notlfile = lambda f: not (
        lfutil.isstandin(f) or lfutil.standin(f) in manifest or f in excluded
    )
    m._files = [lf for lf in m._files if notlfile(lf)]
    m._fileset = set(m._files)
    m.always = lambda: False
    origmatchfn = m.matchfn
    m.matchfn = lambda f: notlfile(f) and origmatchfn(f)
    return m


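# Contrast sketch (illustrative): given a manifest that contains the standin
# b'.hglf/big.bin' plus a normal file b'small.txt', a matcher covering both
# splits cleanly between the two helpers:
#
#   composelargefilematcher(m, manifest)    # matches only b'big.bin'
#   composenormalfilematcher(m, manifest)   # matches only b'small.txt'
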
def addlargefiles(ui, repo, isaddremove, matcher, uipathfn, **opts):
    large = opts.get('large')
    lfsize = lfutil.getminsize(
        ui, lfutil.islfilesrepo(repo), opts.get('lfsize')
    )

    lfmatcher = None
    if lfutil.islfilesrepo(repo):
        lfpats = ui.configlist(lfutil.longname, b'patterns')
        if lfpats:
            lfmatcher = matchmod.match(repo.root, b'', list(lfpats))

    lfnames = []
    m = matcher

    wctx = repo[None]
    for f in wctx.walk(matchmod.badmatch(m, lambda x, y: None)):
        exact = m.exact(f)
        lfile = lfutil.standin(f) in wctx
        nfile = f in wctx
        exists = lfile or nfile

        # Don't warn the user when they attempt to add a normal tracked file.
        # The normal add code will do that for us.
        if exact and exists:
            if lfile:
                ui.warn(_(b'%s already a largefile\n') % uipathfn(f))
            continue

        if (exact or not exists) and not lfutil.isstandin(f):
            # In case the file was removed previously, but not committed
            # (issue3507)
            if not repo.wvfs.exists(f):
                continue

            abovemin = (
                lfsize and repo.wvfs.lstat(f).st_size >= lfsize * 1024 * 1024
            )
            if large or abovemin or (lfmatcher and lfmatcher(f)):
                lfnames.append(f)
                if ui.verbose or not exact:
                    ui.status(_(b'adding %s as a largefile\n') % uipathfn(f))

    bad = []

    # Need to lock, otherwise there could be a race condition between
    # when standins are created and added to the repo.
    with repo.wlock():
        if not opts.get('dry_run'):
            standins = []
            lfdirstate = lfutil.openlfdirstate(ui, repo)
            for f in lfnames:
                standinname = lfutil.standin(f)
                lfutil.writestandin(
                    repo,
                    standinname,
                    hash=b'',
                    executable=lfutil.getexecutable(repo.wjoin(f)),
                )
                standins.append(standinname)
                lfdirstate.set_tracked(f)
            lfdirstate.write(repo.currenttransaction())
            bad += [
                lfutil.splitstandin(f)
                for f in repo[None].add(standins)
                if f in m.files()
            ]

        added = [f for f in lfnames if f not in bad]
    return added, bad


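# Threshold sketch (illustrative): with the default lfsize of 10, a file is
# auto-promoted once repo.wvfs.lstat(f).st_size >= 10 * 1024 * 1024 bytes
# (10 MiB); --large forces promotion regardless of size, and matching a
# largefiles.patterns entry promotes by name.
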
def removelargefiles(ui, repo, isaddremove, matcher, uipathfn, dryrun, **opts):
    after = opts.get('after')
    m = composelargefilematcher(matcher, repo[None].manifest())
    with lfstatus(repo):
        s = repo.status(match=m, clean=not isaddremove)
    manifest = repo[None].manifest()
    modified, added, deleted, clean = [
        [f for f in list if lfutil.standin(f) in manifest]
        for list in (s.modified, s.added, s.deleted, s.clean)
    ]

    def warn(files, msg):
        for f in files:
            ui.warn(msg % uipathfn(f))
        return int(len(files) > 0)

    if after:
        remove = deleted
        result = warn(
            modified + added + clean, _(b'not removing %s: file still exists\n')
        )
    else:
        remove = deleted + clean
        result = warn(
            modified,
            _(
                b'not removing %s: file is modified (use -f'
                b' to force removal)\n'
            ),
        )
        result = (
            warn(
                added,
                _(
                    b'not removing %s: file has been marked for add'
                    b' (use forget to undo)\n'
                ),
            )
            or result
        )

    # Need to lock because standin files are deleted then removed from the
    # repository and we could race in-between.
    with repo.wlock():
        lfdirstate = lfutil.openlfdirstate(ui, repo)
        for f in sorted(remove):
            if ui.verbose or not m.exact(f):
                ui.status(_(b'removing %s\n') % uipathfn(f))

            if not dryrun:
                if not after:
                    repo.wvfs.unlinkpath(f, ignoremissing=True)

        if dryrun:
            return result

        remove = [lfutil.standin(f) for f in remove]
        # If this is being called by addremove, let the original addremove
        # function handle this.
        if not isaddremove:
            for f in remove:
                repo.wvfs.unlinkpath(f, ignoremissing=True)
        repo[None].forget(remove)

        for f in remove:
            lfdirstate.set_untracked(lfutil.splitstandin(f))

        lfdirstate.write(repo.currenttransaction())

    return result


244 # For overriding mercurial.hgweb.webcommands so that largefiles will
246 # For overriding mercurial.hgweb.webcommands so that largefiles will
245 # appear at their right place in the manifests.
247 # appear at their right place in the manifests.
246 @eh.wrapfunction(webcommands, 'decodepath')
248 @eh.wrapfunction(webcommands, 'decodepath')
247 def decodepath(orig, path):
249 def decodepath(orig, path):
248 return lfutil.splitstandin(path) or path
250 return lfutil.splitstandin(path) or path


# -- Wrappers: modify existing commands --------------------------------


@eh.wrapcommand(
    b'add',
    opts=[
        (b'', b'large', None, _(b'add as largefile')),
        (b'', b'normal', None, _(b'add as normal file')),
        (
            b'',
            b'lfsize',
            b'',
            _(
                b'add all files above this size (in megabytes) '
                b'as largefiles (default: 10)'
            ),
        ),
    ],
)
def overrideadd(orig, ui, repo, *pats, **opts):
    if opts.get('normal') and opts.get('large'):
        raise error.Abort(_(b'--normal cannot be used with --large'))
    return orig(ui, repo, *pats, **opts)


@eh.wrapfunction(cmdutil, 'add')
def cmdutiladd(orig, ui, repo, matcher, prefix, uipathfn, explicitonly, **opts):
    # The --normal flag short circuits this override
    if opts.get('normal'):
        return orig(ui, repo, matcher, prefix, uipathfn, explicitonly, **opts)

    ladded, lbad = addlargefiles(ui, repo, False, matcher, uipathfn, **opts)
    normalmatcher = composenormalfilematcher(
        matcher, repo[None].manifest(), ladded
    )
    bad = orig(ui, repo, normalmatcher, prefix, uipathfn, explicitonly, **opts)

    bad.extend(f for f in lbad)
    return bad


@eh.wrapfunction(cmdutil, 'remove')
def cmdutilremove(
    orig, ui, repo, matcher, prefix, uipathfn, after, force, subrepos, dryrun
):
    normalmatcher = composenormalfilematcher(matcher, repo[None].manifest())
    result = orig(
        ui,
        repo,
        normalmatcher,
        prefix,
        uipathfn,
        after,
        force,
        subrepos,
        dryrun,
    )
    return (
        removelargefiles(
            ui, repo, False, matcher, uipathfn, dryrun, after=after, force=force
        )
        or result
    )


@eh.wrapfunction(dirstate.dirstate, '_changing')
@contextlib.contextmanager
def _changing(orig, self, repo, change_type):
    pre = sub_dirstate = getattr(self, '_sub_dirstate', None)
    try:
        lfd = getattr(self, '_large_file_dirstate', False)
        if sub_dirstate is None and not lfd:
            sub_dirstate = lfutil.openlfdirstate(repo.ui, repo)
            self._sub_dirstate = sub_dirstate
        if not lfd:
            assert self._sub_dirstate is not None
        with orig(self, repo, change_type):
            if sub_dirstate is None:
                yield
            else:
                with sub_dirstate._changing(repo, change_type):
                    yield
    finally:
        self._sub_dirstate = pre


@eh.wrapfunction(dirstate.dirstate, 'running_status')
@contextlib.contextmanager
def running_status(orig, self, repo):
    pre = sub_dirstate = getattr(self, '_sub_dirstate', None)
    try:
        lfd = getattr(self, '_large_file_dirstate', False)
        if sub_dirstate is None and not lfd:
            sub_dirstate = lfutil.openlfdirstate(repo.ui, repo)
            self._sub_dirstate = sub_dirstate
        if not lfd:
            assert self._sub_dirstate is not None
        with orig(self, repo):
            if sub_dirstate is None:
                yield
            else:
                with sub_dirstate.running_status(repo):
                    yield
    finally:
        self._sub_dirstate = pre
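

# Editor's note: both wrappers above follow the same shape: enter the
# original context manager, then conditionally enter the same kind of
# context on a subordinate dirstate. A stdlib-only sketch of that
# delegation pattern (illustrative names, not Mercurial API):
@contextlib.contextmanager
def _sketch_delegating(outer, sub=None):
    """Enter 'outer'; if 'sub' is given, enter it nested inside 'outer'.

    Usage sketch: any context managers work, e.g. two open files:
        with _sketch_delegating(open('a'), open('b')):
            ...
    """
    with outer:
        if sub is None:
            yield
        else:
            with sub:
                yield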


@eh.wrapfunction(subrepo.hgsubrepo, 'status')
def overridestatusfn(orig, repo, rev2, **opts):
    with lfstatus(repo._repo):
        return orig(repo, rev2, **opts)


@eh.wrapcommand(b'status')
def overridestatus(orig, ui, repo, *pats, **opts):
    with lfstatus(repo):
        return orig(ui, repo, *pats, **opts)


@eh.wrapfunction(subrepo.hgsubrepo, 'dirty')
def overridedirty(orig, repo, ignoreupdate=False, missing=False):
    with lfstatus(repo._repo):
        return orig(repo, ignoreupdate=ignoreupdate, missing=missing)


@eh.wrapcommand(b'log')
def overridelog(orig, ui, repo, *pats, **opts):
    def overridematchandpats(
        orig,
        ctx,
        pats=(),
        opts=None,
        globbed=False,
        default=b'relpath',
        badfn=None,
    ):
        """Matcher that merges root directory with .hglf, suitable for log.
        It is still possible to match .hglf directly.
        For any listed files run log on the standin too.
        matchfn tries both the given filename and with .hglf stripped.
        """
        if opts is None:
            opts = {}
        matchandpats = orig(ctx, pats, opts, globbed, default, badfn=badfn)
        m, p = copy.copy(matchandpats)

        if m.always():
            # We want to match everything anyway, so there's no benefit trying
            # to add standins.
            return matchandpats

        pats = set(p)

        def fixpats(pat, tostandin=lfutil.standin):
            if pat.startswith(b'set:'):
                return pat

            kindpat = matchmod._patsplit(pat, None)

            if kindpat[0] is not None:
                return kindpat[0] + b':' + tostandin(kindpat[1])
            return tostandin(kindpat[1])

        cwd = repo.getcwd()
        if cwd:
            hglf = lfutil.shortname
            back = util.pconvert(repo.pathto(hglf)[: -len(hglf)])

            def tostandin(f):
                # The file may already be a standin, so truncate the back
                # prefix and test before mangling it. This avoids turning
                # 'glob:../.hglf/foo*' into 'glob:../.hglf/../.hglf/foo*'.
                if f.startswith(back) and lfutil.splitstandin(f[len(back) :]):
                    return f

                # An absolute path is from outside the repo, so truncate the
                # path to the root before building the standin. Otherwise cwd
                # is somewhere in the repo, relative to root, and needs to be
                # prepended before building the standin.
                if os.path.isabs(cwd):
                    f = f[len(back) :]
                else:
                    f = cwd + b'/' + f
                return back + lfutil.standin(f)

        else:

            def tostandin(f):
                if lfutil.isstandin(f):
                    return f
                return lfutil.standin(f)

        pats.update(fixpats(f, tostandin) for f in p)

        m._was_tampered_with = True

        for i in range(0, len(m._files)):
            # Don't add '.hglf' to m.files, since that is already covered by '.'
            if m._files[i] == b'.':
                continue
            standin = lfutil.standin(m._files[i])
            # If the "standin" is a directory, append instead of replace to
            # support naming a directory on the command line with only
            # largefiles. The original directory is kept to support normal
            # files.
            if standin in ctx:
                m._files[i] = standin
            elif m._files[i] not in ctx and repo.wvfs.isdir(standin):
                m._files.append(standin)

        m._fileset = set(m._files)
        m.always = lambda: False
        origmatchfn = m.matchfn

        def lfmatchfn(f):
            lf = lfutil.splitstandin(f)
            if lf is not None and origmatchfn(lf):
                return True
            r = origmatchfn(f)
            return r

        m.matchfn = lfmatchfn

        ui.debug(b'updated patterns: %s\n' % b', '.join(sorted(pats)))
        return m, pats

    # For hg log --patch, the match object is used in two different senses:
    # (1) to determine what revisions should be printed out, and
    # (2) to determine what files to print out diffs for.
    # The magic matchandpats override should be used for case (1) but not for
    # case (2).
    oldmatchandpats = scmutil.matchandpats

    def overridemakefilematcher(orig, repo, pats, opts, badfn=None):
        wctx = repo[None]
        match, pats = oldmatchandpats(wctx, pats, opts, badfn=badfn)
        return lambda ctx: match

    wrappedmatchandpats = extensions.wrappedfunction(
        scmutil, 'matchandpats', overridematchandpats
    )
    wrappedmakefilematcher = extensions.wrappedfunction(
        logcmdutil, '_makenofollowfilematcher', overridemakefilematcher
    )
    with wrappedmatchandpats, wrappedmakefilematcher:
        return orig(ui, repo, *pats, **opts)
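

# Editor's note: the pattern rewriting in overridelog() boils down to
# prefixing the path part of each 'kind:pattern' with the standin
# directory. A simplified sketch, assuming a bare '.hglf/' prefix and
# ignoring the cwd-relative and fileset cases handled above:
def _sketch_fixpat(pat):
    """Rewrite b'glob:sub/*' to b'glob:.hglf/sub/*' (illustrative only).

    >>> _sketch_fixpat(b'glob:sub/*')
    b'glob:.hglf/sub/*'
    >>> _sketch_fixpat(b'sub/file.bin')
    b'.hglf/sub/file.bin'
    """
    kind, sep, rest = pat.partition(b':')
    if sep and kind in (b'glob', b'path', b're', b'relpath'):
        return kind + b':' + b'.hglf/' + rest
    return b'.hglf/' + pat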


@eh.wrapcommand(
    b'verify',
    opts=[
        (
            b'',
            b'large',
            None,
            _(b'verify that all largefiles in the current revision exist'),
        ),
        (
            b'',
            b'lfa',
            None,
            _(b'verify largefiles in all revisions, not just current'),
        ),
        (
            b'',
            b'lfc',
            None,
            _(b'verify local largefile contents, not just existence'),
        ),
    ],
)
def overrideverify(orig, ui, repo, *pats, **opts):
    large = opts.pop('large', False)
    all = opts.pop('lfa', False)
    contents = opts.pop('lfc', False)

    result = orig(ui, repo, *pats, **opts)
    if large or all or contents:
        result = result or lfcommands.verifylfiles(ui, repo, all, contents)
    return result
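

# Editor's note: with the options added above, verification depth is
# opt-in. Illustrative invocations:
#
#   hg verify --large              # largefiles of the current revision exist
#   hg verify --large --lfa        # ... in every revision
#   hg verify --large --lfa --lfc  # ... and contents match the recorded hash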


@eh.wrapcommand(
    b'debugstate',
    opts=[(b'', b'large', None, _(b'display largefiles dirstate'))],
)
def overridedebugstate(orig, ui, repo, *pats, **opts):
    large = opts.pop('large', False)
    if large:

        class fakerepo:
            dirstate = lfutil.openlfdirstate(ui, repo)

        orig(ui, fakerepo, *pats, **opts)
    else:
        orig(ui, repo, *pats, **opts)


# Before starting the manifest merge, merge.updates will call
# _checkunknownfile to check if there are any files in the merged-in
# changeset that collide with unknown files in the working copy.
#
# The largefiles are seen as unknown, so this prevents us from merging
# in a file 'foo' if we already have a largefile with the same name.
#
# The overridden function filters the unknown files by removing any
# largefiles. This makes the merge proceed and we can then handle this
# case further in the overridden calculateupdates function below.
@eh.wrapfunction(merge, '_checkunknownfile')
def overridecheckunknownfile(
    origfn, dirstate, wvfs, dircache, wctx, mctx, f, f2=None
):
    if lfutil.standin(dirstate.normalize(f)) in wctx:
        return False
    return origfn(dirstate, wvfs, dircache, wctx, mctx, f, f2)


# The manifest merge handles conflicts on the manifest level. We want
# to handle changes in largefile-ness of files at this level too.
#
# The strategy is to run the original calculateupdates and then process
# the action list it outputs. There are two cases we need to deal with:
#
# 1. Normal file in p1, largefile in p2. Here the largefile is
#    detected via its standin file, which will enter the working copy
#    with a "get" action. It is not "merge" since the standin is all
#    Mercurial is concerned with at this level -- the link to the
#    existing normal file is not relevant here.
#
# 2. Largefile in p1, normal file in p2. Here we get a "merge" action
#    since the largefile will be present in the working copy and
#    different from the normal file in p2. Mercurial therefore
#    triggers a merge action.
#
# In both cases, we prompt the user and emit new actions to either
# remove the standin (if the normal file was kept) or to remove the
# normal file and get the standin (if the largefile was kept). The
# default prompt answer is to use the largefile version since it was
# presumably changed on purpose.
#
# Finally, the merge.applyupdates function will then take care of
# writing the files into the working copy and lfcommands.updatelfiles
# will update the largefiles.
@eh.wrapfunction(merge, 'calculateupdates')
def overridecalculateupdates(
    origfn, repo, p1, p2, pas, branchmerge, force, acceptremote, *args, **kwargs
):
    overwrite = force and not branchmerge
    mresult = origfn(
        repo, p1, p2, pas, branchmerge, force, acceptremote, *args, **kwargs
    )

    if overwrite:
        return mresult

    # Gather the set of largefile names affected by this merge: files
    # whose standin appears in the first parent, plus the largefile
    # names of any standins listed in the merge result.
    lfiles = set()
    for f in mresult.files():
        splitstandin = lfutil.splitstandin(f)
        if splitstandin is not None and splitstandin in p1:
            lfiles.add(splitstandin)
        elif lfutil.standin(f) in p1:
            lfiles.add(f)

    for lfile in sorted(lfiles):
        standin = lfutil.standin(lfile)
        (lm, largs, lmsg) = mresult.getfile(lfile, (None, None, None))
        (sm, sargs, smsg) = mresult.getfile(standin, (None, None, None))

        if sm in (ACTION_GET, ACTION_DELETED_CHANGED) and lm != ACTION_REMOVE:
            if sm == ACTION_DELETED_CHANGED:
                f1, f2, fa, move, anc = sargs
                sargs = (p2[f2].flags(), False)
            # Case 1: normal file in the working copy, largefile in
            # the second parent
            usermsg = (
                _(
                    b'remote turned local normal file %s into a largefile\n'
                    b'use (l)argefile or keep (n)ormal file?'
                    b'$$ &Largefile $$ &Normal file'
                )
                % lfile
            )
            if repo.ui.promptchoice(usermsg, 0) == 0:  # pick remote largefile
                mresult.addfile(
                    lfile, ACTION_REMOVE, None, b'replaced by standin'
                )
                mresult.addfile(standin, ACTION_GET, sargs, b'replaces standin')
            else:  # keep local normal file
                mresult.addfile(lfile, ACTION_KEEP, None, b'replaces standin')
                if branchmerge:
                    mresult.addfile(
                        standin,
                        ACTION_KEEP,
                        None,
                        b'replaced by non-standin',
                    )
                else:
                    mresult.addfile(
                        standin,
                        ACTION_REMOVE,
                        None,
                        b'replaced by non-standin',
                    )
        if lm in (ACTION_GET, ACTION_DELETED_CHANGED) and sm != ACTION_REMOVE:
            if lm == ACTION_DELETED_CHANGED:
                f1, f2, fa, move, anc = largs
                largs = (p2[f2].flags(), False)
            # Case 2: largefile in the working copy, normal file in
            # the second parent
            usermsg = (
                _(
                    b'remote turned local largefile %s into a normal file\n'
                    b'keep (l)argefile or use (n)ormal file?'
                    b'$$ &Largefile $$ &Normal file'
                )
                % lfile
            )
            if repo.ui.promptchoice(usermsg, 0) == 0:  # keep local largefile
                if branchmerge:
                    # largefile can be restored from standin safely
                    mresult.addfile(
                        lfile,
                        ACTION_KEEP,
                        None,
                        b'replaced by standin',
                    )
                    mresult.addfile(
                        standin, ACTION_KEEP, None, b'replaces standin'
                    )
                else:
                    # "lfile" should be marked as "removed" without
                    # removal of itself
                    mresult.addfile(
                        lfile,
                        MERGE_ACTION_LARGEFILE_MARK_REMOVED,
                        None,
                        b'forget non-standin largefile',
                    )

                    # linear-merge should treat this largefile as 're-added'
                    mresult.addfile(standin, ACTION_ADD, None, b'keep standin')
            else:  # pick remote normal file
                mresult.addfile(lfile, ACTION_GET, largs, b'replaces standin')
                mresult.addfile(
                    standin,
                    ACTION_REMOVE,
                    None,
                    b'replaced by non-standin',
                )

    return mresult
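

# Editor's note: the two cases above amount to rewriting a pair of merge
# actions (one for the file, one for its standin) so that exactly one of
# them survives. A stripped-down sketch over a plain dict, with
# hypothetical string action names; the real code uses mresult.addfile()
# and the ACTION_* constants imported by this module:
def _sketch_resolve_pair(actions, lfile, standin, keep_largefile):
    """Keep either the largefile's standin or the normal file, not both."""
    if keep_largefile:
        actions[lfile] = 'remove'    # drop the normal file
        actions[standin] = 'get'     # check out the standin instead
    else:
        actions[lfile] = 'keep'      # keep the normal file
        actions[standin] = 'remove'  # drop the standin
    return actions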


@eh.wrapfunction(mergestatemod, 'recordupdates')
def mergerecordupdates(orig, repo, actions, branchmerge, getfiledata):
    if MERGE_ACTION_LARGEFILE_MARK_REMOVED in actions:
        lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
        for lfile, args, msg in actions[MERGE_ACTION_LARGEFILE_MARK_REMOVED]:
            # this should be executed before 'orig', to execute 'remove'
            # before all other actions
            repo.dirstate.update_file(lfile, p1_tracked=True, wc_tracked=False)
            # make sure lfile doesn't get synclfdirstate'd as normal
            lfdirstate.update_file(lfile, p1_tracked=False, wc_tracked=True)

    return orig(repo, actions, branchmerge, getfiledata)


# Override filemerge to prompt the user about how they wish to merge
# largefiles. This will handle identical edits without prompting the user.
@eh.wrapfunction(filemerge, 'filemerge')
def overridefilemerge(
    origfn, repo, wctx, mynode, orig, fcd, fco, fca, labels=None
):
    if not lfutil.isstandin(orig) or fcd.isabsent() or fco.isabsent():
        return origfn(repo, wctx, mynode, orig, fcd, fco, fca, labels=labels)

    ahash = lfutil.readasstandin(fca).lower()
    dhash = lfutil.readasstandin(fcd).lower()
    ohash = lfutil.readasstandin(fco).lower()
    if (
        ohash != ahash
        and ohash != dhash
        and (
            dhash == ahash
            or repo.ui.promptchoice(
                _(
                    b'largefile %s has a merge conflict\nancestor was %s\n'
                    b'you can keep (l)ocal %s or take (o)ther %s.\n'
                    b'what do you want to do?'
                    b'$$ &Local $$ &Other'
                )
                % (lfutil.splitstandin(orig), ahash, dhash, ohash),
                0,
            )
            == 1
        )
    ):
        repo.wwrite(fcd.path(), fco.data(), fco.flags())
    return 0, False
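

# Editor's note: a standin records the largefile's content hash, so
# comparing largefile revisions reduces to comparing short text files.
# The three-way logic above, restated as a sketch (hash strings stand in
# for lfutil.readasstandin() results):
def _sketch_needs_prompt(ahash, dhash, ohash):
    """Return True only when both sides changed the largefile differently.

    >>> _sketch_needs_prompt(b'a', b'a', b'b')  # only other changed: take it
    False
    >>> _sketch_needs_prompt(b'a', b'b', b'a')  # only local changed: keep it
    False
    >>> _sketch_needs_prompt(b'a', b'b', b'c')  # both changed: ask the user
    True
    """
    return ohash != ahash and ohash != dhash and dhash != ahash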


@eh.wrapfunction(copiesmod, 'pathcopies')
def copiespathcopies(orig, ctx1, ctx2, match=None):
    copies = orig(ctx1, ctx2, match=match)
    updated = {}

    for k, v in copies.items():
        updated[lfutil.splitstandin(k) or k] = lfutil.splitstandin(v) or v

    return updated
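

# Editor's note: the remapping above is a plain key/value translation;
# a sketch with the same shape, reusing the hypothetical helper from the
# decodepath note rather than lfutil.splitstandin:
def _sketch_unstandin_copies(copies):
    """Map b'.hglf/x' -> b'.hglf/y' entries back to b'x' -> b'y'.

    >>> _sketch_unstandin_copies({b'.hglf/new.bin': b'.hglf/old.bin'})
    {b'new.bin': b'old.bin'}
    """
    return {
        (_sketch_splitstandin(k) or k): (_sketch_splitstandin(v) or v)
        for k, v in copies.items()
    }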


# Copy first changes the matchers to match standins instead of
# largefiles. Then it overrides util.copyfile; in that override it
# checks whether the destination largefile already exists. It also
# keeps a list of copied files so that the largefiles can be copied
# and the dirstate updated.
@eh.wrapfunction(cmdutil, 'copy')
def overridecopy(orig, ui, repo, pats, opts, rename=False):
    # doesn't remove largefile on rename
    if len(pats) < 2:
        # this isn't legal, let the original function deal with it
        return orig(ui, repo, pats, opts, rename)

    # This could copy both lfiles and normal files in one command,
    # but we don't want to do that. First replace their matcher to
    # only match normal files and run it, then replace it to just
    # match largefiles and run it again.
    nonormalfiles = False
    nolfiles = False
    manifest = repo[None].manifest()

    def normalfilesmatchfn(
        orig,
        ctx,
        pats=(),
        opts=None,
        globbed=False,
        default=b'relpath',
        badfn=None,
    ):
        if opts is None:
            opts = {}
        match = orig(ctx, pats, opts, globbed, default, badfn=badfn)
        return composenormalfilematcher(match, manifest)

    with extensions.wrappedfunction(scmutil, 'match', normalfilesmatchfn):
        try:
            result = orig(ui, repo, pats, opts, rename)
        except error.Abort as e:
            if e.message != _(b'no files to copy'):
                raise e
            else:
                nonormalfiles = True
                result = 0

    # The first rename can cause our current working directory to be removed.
    # In that case there is nothing left to copy/rename so just quit.
    try:
        repo.getcwd()
    except OSError:
        return result

    def makestandin(relpath):
        path = pathutil.canonpath(repo.root, repo.getcwd(), relpath)
        return repo.wvfs.join(lfutil.standin(path))

    fullpats = scmutil.expandpats(pats)
    dest = fullpats[-1]

    if os.path.isdir(dest):
        if not os.path.isdir(makestandin(dest)):
            os.makedirs(makestandin(dest))

    try:
        # When we call orig below it creates the standins but we don't add
        # them to the dir state until later so lock during that time.
        wlock = repo.wlock()

        manifest = repo[None].manifest()

        def overridematch(
            orig,
            ctx,
            pats=(),
            opts=None,
            globbed=False,
            default=b'relpath',
            badfn=None,
        ):
            if opts is None:
                opts = {}
            newpats = []
            # The patterns were previously mangled to add the standin
            # directory; we need to remove that now
            for pat in pats:
                if matchmod.patkind(pat) is None and lfutil.shortname in pat:
                    newpats.append(pat.replace(lfutil.shortname, b''))
                else:
                    newpats.append(pat)
            match = orig(ctx, newpats, opts, globbed, default, badfn=badfn)
            m = copy.copy(match)
            m._was_tampered_with = True
            lfile = lambda f: lfutil.standin(f) in manifest
            m._files = [lfutil.standin(f) for f in m._files if lfile(f)]
            m._fileset = set(m._files)
            origmatchfn = m.matchfn

            def matchfn(f):
                lfile = lfutil.splitstandin(f)
                return (
                    lfile is not None
                    and (f in manifest)
                    and origmatchfn(lfile)
                    or None
                )

            m.matchfn = matchfn
            return m

        listpats = []
        for pat in pats:
            if matchmod.patkind(pat) is not None:
                listpats.append(pat)
            else:
                listpats.append(makestandin(pat))

        copiedfiles = []

        def overridecopyfile(orig, src, dest, *args, **kwargs):
            if lfutil.shortname in src and dest.startswith(
                repo.wjoin(lfutil.shortname)
            ):
                destlfile = dest.replace(lfutil.shortname, b'')
                if not opts[b'force'] and os.path.exists(destlfile):
                    raise IOError(
                        b'', _(b'destination largefile already exists')
                    )
            copiedfiles.append((src, dest))
            orig(src, dest, *args, **kwargs)

        with extensions.wrappedfunction(util, 'copyfile', overridecopyfile):
            with extensions.wrappedfunction(scmutil, 'match', overridematch):
                result += orig(ui, repo, listpats, opts, rename)

        lfdirstate = lfutil.openlfdirstate(ui, repo)
        for (src, dest) in copiedfiles:
            if lfutil.shortname in src and dest.startswith(
                repo.wjoin(lfutil.shortname)
            ):
                srclfile = src.replace(repo.wjoin(lfutil.standin(b'')), b'')
                destlfile = dest.replace(repo.wjoin(lfutil.standin(b'')), b'')
                destlfiledir = repo.wvfs.dirname(repo.wjoin(destlfile)) or b'.'
                if not os.path.isdir(destlfiledir):
                    os.makedirs(destlfiledir)
                if rename:
                    os.rename(repo.wjoin(srclfile), repo.wjoin(destlfile))

                    # The file is gone, but this deletes any empty parent
                    # directories as a side-effect.
                    repo.wvfs.unlinkpath(srclfile, ignoremissing=True)
                    lfdirstate.set_untracked(srclfile)
                else:
                    util.copyfile(repo.wjoin(srclfile), repo.wjoin(destlfile))

                lfdirstate.set_tracked(destlfile)
        lfdirstate.write(repo.currenttransaction())
    except error.Abort as e:
        if e.message != _(b'no files to copy'):
            raise e
        else:
            nolfiles = True
    finally:
        wlock.release()

    if nolfiles and nonormalfiles:
        raise error.Abort(_(b'no files to copy'))

    return result
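

# Editor's note: makestandin() above maps a working-directory path to
# the absolute path of its standin. A simplified sketch with plain
# byte strings (the real code canonicalizes the path against the repo
# root and working-directory vfs first):
import posixpath as _sketch_posixpath


def _sketch_makestandin(repo_root, relpath):
    """
    >>> _sketch_makestandin(b'/repo', b'data/big.bin')
    b'/repo/.hglf/data/big.bin'
    """
    return _sketch_posixpath.join(repo_root, b'.hglf', relpath)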


# When the user calls revert, we have to be careful to not revert any
# changes to other largefiles accidentally. This means we have to keep
# track of the largefiles that are being reverted so we only pull down
# the necessary largefiles.
#
# Standins are only updated (to match the hash of largefiles) before
# commits. Update the standins then run the original revert, changing
# the matcher to hit standins instead of largefiles. Based on the
# resulting standins update the largefiles.
@eh.wrapfunction(cmdutil, 'revert')
def overriderevert(orig, ui, repo, ctx, *pats, **opts):
    # Because we put the standins in a bad state (by updating them)
    # and then return them to a correct state we need to lock to
    # prevent others from changing them in their incorrect state.
    with repo.wlock(), repo.dirstate.running_status(repo):
        lfdirstate = lfutil.openlfdirstate(ui, repo)
        s = lfutil.lfdirstatestatus(lfdirstate, repo)
        lfdirstate.write(repo.currenttransaction())
        for lfile in s.modified:
            lfutil.updatestandin(repo, lfile, lfutil.standin(lfile))
        for lfile in s.deleted:
            fstandin = lfutil.standin(lfile)
            if repo.wvfs.exists(fstandin):
                repo.wvfs.unlink(fstandin)

    oldstandins = lfutil.getstandinsstate(repo)

    def overridematch(
        orig,
        mctx,
        pats=(),
        opts=None,
        globbed=False,
        default=b'relpath',
        badfn=None,
    ):
        if opts is None:
            opts = {}
        match = orig(mctx, pats, opts, globbed, default, badfn=badfn)
        m = copy.copy(match)
        m._was_tampered_with = True

        # revert supports recursing into subrepos, and though largefiles
        # currently doesn't work correctly in that case, this match is
        # called, so the lfdirstate above may not be the correct one for
        # this invocation of match.
        lfdirstate = lfutil.openlfdirstate(
            mctx.repo().ui, mctx.repo(), False
        )

        wctx = repo[None]
        matchfiles = []
        for f in m._files:
            standin = lfutil.standin(f)
            if standin in ctx or standin in mctx:
                matchfiles.append(standin)
            elif standin in wctx or lfdirstate.get_entry(f).removed:
                continue
            else:
                matchfiles.append(f)
        m._files = matchfiles
        m._fileset = set(m._files)
        origmatchfn = m.matchfn

        def matchfn(f):
            lfile = lfutil.splitstandin(f)
            if lfile is not None:
                return origmatchfn(lfile) and (f in ctx or f in mctx)
            return origmatchfn(f)

        m.matchfn = matchfn
        return m

    with extensions.wrappedfunction(scmutil, 'match', overridematch):
        orig(ui, repo, ctx, *pats, **opts)

    newstandins = lfutil.getstandinsstate(repo)
    filelist = lfutil.getlfilestoupdate(oldstandins, newstandins)
    # lfdirstate should be 'normallookup'-ed for updated files,
    # because reverting doesn't touch dirstate for 'normal' files
    # when target revision is explicitly specified: in such case,
    # 'n' and valid timestamp in dirstate doesn't ensure 'clean'
    # of target (standin) file.
    lfcommands.updatelfiles(
        ui, repo, filelist, printmessage=False, normallookup=True
    )


# after pulling changesets, we need to take some extra care to fetch
# the largefiles referenced by the new changesets
@eh.wrapcommand(
    b'pull',
    opts=[
        (
            b'',
            b'all-largefiles',
            None,
            _(b'download all pulled versions of largefiles (DEPRECATED)'),
        ),
        (
            b'',
            b'lfrev',
            [],
            _(b'download largefiles for these revisions'),
            _(b'REV'),
        ),
    ],
)
def overridepull(orig, ui, repo, source=None, **opts):
    revsprepull = len(repo)
    if not source:
        source = b'default'
    repo.lfpullsource = source
    result = orig(ui, repo, source, **opts)
    revspostpull = len(repo)
    lfrevs = opts.get('lfrev', [])
    if opts.get('all_largefiles'):
        lfrevs.append(b'pulled()')
    if lfrevs and revspostpull > revsprepull:
        numcached = 0
        repo.firstpulled = revsprepull  # for pulled() revset expression
        try:
            for rev in logcmdutil.revrange(repo, lfrevs):
                ui.note(_(b'pulling largefiles for revision %d\n') % rev)
                (cached, missing) = lfcommands.cachelfiles(ui, repo, rev)
                numcached += len(cached)
        finally:
            del repo.firstpulled
        ui.status(_(b"%d largefiles cached\n") % numcached)
    return result
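

# Editor's note: illustrative pulls using the options added above;
# --all-largefiles simply appends b'pulled()' to the --lfrev list:
#
#   hg pull --lfrev "pulled()"   # fetch largefiles for everything new
#   hg pull --all-largefiles     # deprecated spelling of the same thing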


@eh.wrapcommand(
    b'push',
    opts=[
        (
            b'',
            b'lfrev',
            [],
            _(b'upload largefiles for these revisions'),
            _(b'REV'),
        )
    ],
)
def overridepush(orig, ui, repo, *args, **kwargs):
    """Override push command and store --lfrev parameters in opargs"""
    lfrevs = kwargs.pop('lfrev', None)
    if lfrevs:
        opargs = kwargs.setdefault('opargs', {})
        opargs[b'lfrevs'] = logcmdutil.revrange(repo, lfrevs)
    return orig(ui, repo, *args, **kwargs)


@eh.wrapfunction(exchange, 'pushoperation')
def exchangepushoperation(orig, *args, **kwargs):
    """Override pushoperation constructor and store lfrevs parameter"""
    lfrevs = kwargs.pop('lfrevs', None)
    pushop = orig(*args, **kwargs)
    pushop.lfrevs = lfrevs
    return pushop


@eh.revsetpredicate(b'pulled()')
def pulledrevsetsymbol(repo, subset, x):
    """Changesets that have just been pulled.

    Only available with largefiles from pull --lfrev expressions.

    .. container:: verbose

      Some examples:

      - pull largefiles for all new changesets::

          hg pull --lfrev "pulled()"

      - pull largefiles for all new branch heads::

          hg pull --lfrev "head(pulled()) and not closed()"

    """

    try:
        firstpulled = repo.firstpulled
    except AttributeError:
        raise error.Abort(_(b"pulled() only available in --lfrev"))
    return smartset.baseset([r for r in subset if r >= firstpulled])


@eh.wrapcommand(
    b'clone',
    opts=[
        (
            b'',
            b'all-largefiles',
            None,
            _(b'download all versions of all largefiles'),
        )
    ],
)
def overrideclone(orig, ui, source, dest=None, **opts):
    d = dest
    if d is None:
        d = hg.defaultdest(source)
    if opts.get('all_largefiles') and not hg.islocal(d):
        raise error.Abort(
            _(b'--all-largefiles is incompatible with non-local destination %s')
            % d
        )

    return orig(ui, source, dest, **opts)
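

# Editor's note: an illustrative invocation of the option added above;
# it only works for local destinations, per the check in overrideclone():
#
#   hg clone --all-largefiles https://example.com/repo localcopy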


@eh.wrapfunction(hg, 'clone')
def hgclone(orig, ui, opts, *args, **kwargs):
    result = orig(ui, opts, *args, **kwargs)

    if result is not None:
        sourcerepo, destrepo = result
        repo = destrepo.local()

        # When cloning to a remote repo (like through SSH), no repo is available
        # from the peer. Therefore the largefiles can't be downloaded and the
        # hgrc can't be updated.
        if not repo:
            return result

        # Caching is implicitly limited to 'rev' option, since the dest repo was
        # truncated at that point. The user may expect a download count with
        # this option, so attempt the download whether or not this is a
        # largefile repo.
        if opts.get(b'all_largefiles'):
            success, missing = lfcommands.downloadlfiles(ui, repo)

            if missing != 0:
                return None

    return result


@eh.wrapcommand(b'rebase', extension=b'rebase')
def overriderebasecmd(orig, ui, repo, **opts):
    if not hasattr(repo, '_largefilesenabled'):
        return orig(ui, repo, **opts)

    resuming = opts.get('continue')
    repo._lfcommithooks.append(lfutil.automatedcommithook(resuming))
    repo._lfstatuswriters.append(lambda *msg, **opts: None)
    try:
        with ui.configoverride(
            {(b'rebase', b'experimental.inmemory'): False}, b"largefiles"
        ):
            return orig(ui, repo, **opts)
    finally:
        repo._lfstatuswriters.pop()
        repo._lfcommithooks.pop()
1184
1190
1185
1191
1186 @eh.extsetup
1192 @eh.extsetup
1187 def overriderebase(ui):
1193 def overriderebase(ui):
1188 try:
1194 try:
1189 rebase = extensions.find(b'rebase')
1195 rebase = extensions.find(b'rebase')
1190 except KeyError:
1196 except KeyError:
1191 pass
1197 pass
1192 else:
1198 else:
1193
1199
1194 def _dorebase(orig, *args, **kwargs):
1200 def _dorebase(orig, *args, **kwargs):
1195 kwargs['inmemory'] = False
1201 kwargs['inmemory'] = False
1196 return orig(*args, **kwargs)
1202 return orig(*args, **kwargs)
1197
1203
1198 extensions.wrapfunction(rebase, '_dorebase', _dorebase)
1204 extensions.wrapfunction(rebase, '_dorebase', _dorebase)
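# A minimal standalone sketch (plain Python, no Mercurial imports) of the
# wrapfunction pattern used throughout this file: the wrapper receives the
# original callable as its first argument and can force keyword arguments
# before delegating. All names below are illustrative stand-ins.
def wrapfunction(container, name, wrapper):
    origfn = getattr(container, name)

    def wrapped(*args, **kwargs):
        return wrapper(origfn, *args, **kwargs)

    setattr(container, name, wrapped)


class FakeRebase:
    def _dorebase(self, inmemory=True):
        return 'in-memory' if inmemory else 'on-disk'


def _dorebase(orig, *args, **kwargs):
    kwargs['inmemory'] = False  # largefiles cannot rebase in memory
    return orig(*args, **kwargs)


wrapfunction(FakeRebase, '_dorebase', _dorebase)
assert FakeRebase()._dorebase() == 'on-disk'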
1199
1205
1200
1206
1201 @eh.wrapcommand(b'archive')
1207 @eh.wrapcommand(b'archive')
1202 def overridearchivecmd(orig, ui, repo, dest, **opts):
1208 def overridearchivecmd(orig, ui, repo, dest, **opts):
1203 with lfstatus(repo.unfiltered()):
1209 with lfstatus(repo.unfiltered()):
1204 return orig(ui, repo.unfiltered(), dest, **opts)
1210 return orig(ui, repo.unfiltered(), dest, **opts)
1205
1211
1206
1212
1207 @eh.wrapfunction(webcommands, 'archive')
1213 @eh.wrapfunction(webcommands, 'archive')
1208 def hgwebarchive(orig, web):
1214 def hgwebarchive(orig, web):
1209 with lfstatus(web.repo):
1215 with lfstatus(web.repo):
1210 return orig(web)
1216 return orig(web)
1211
1217
1212
1218
1213 @eh.wrapfunction(archival, 'archive')
1219 @eh.wrapfunction(archival, 'archive')
1214 def overridearchive(
1220 def overridearchive(
1215 orig,
1221 orig,
1216 repo,
1222 repo,
1217 dest,
1223 dest,
1218 node,
1224 node,
1219 kind,
1225 kind,
1220 decode=True,
1226 decode=True,
1221 match=None,
1227 match=None,
1222 prefix=b'',
1228 prefix=b'',
1223 mtime=None,
1229 mtime=None,
1224 subrepos=None,
1230 subrepos=None,
1225 ):
1231 ):
1226 # For some reason setting repo.lfstatus in hgwebarchive only changes the
1232 # For some reason setting repo.lfstatus in hgwebarchive only changes the
1227 # unfiltered repo's attr, so check that as well.
1233 # unfiltered repo's attr, so check that as well.
1228 if not repo.lfstatus and not repo.unfiltered().lfstatus:
1234 if not repo.lfstatus and not repo.unfiltered().lfstatus:
1229 return orig(
1235 return orig(
1230 repo, dest, node, kind, decode, match, prefix, mtime, subrepos
1236 repo, dest, node, kind, decode, match, prefix, mtime, subrepos
1231 )
1237 )
1232
1238
1233 # No need to lock because we are only reading history and
1239 # No need to lock because we are only reading history and
1234 # largefile caches, neither of which are modified.
1240 # largefile caches, neither of which are modified.
1235 if node is not None:
1241 if node is not None:
1236 lfcommands.cachelfiles(repo.ui, repo, node)
1242 lfcommands.cachelfiles(repo.ui, repo, node)
1237
1243
1238 if kind not in archival.archivers:
1244 if kind not in archival.archivers:
1239 raise error.Abort(_(b"unknown archive type '%s'") % kind)
1245 raise error.Abort(_(b"unknown archive type '%s'") % kind)
1240
1246
1241 ctx = repo[node]
1247 ctx = repo[node]
1242
1248
1243 if kind == b'files':
1249 if kind == b'files':
1244 if prefix:
1250 if prefix:
1245 raise error.Abort(_(b'cannot give prefix when archiving to files'))
1251 raise error.Abort(_(b'cannot give prefix when archiving to files'))
1246 else:
1252 else:
1247 prefix = archival.tidyprefix(dest, kind, prefix)
1253 prefix = archival.tidyprefix(dest, kind, prefix)
1248
1254
1249 def write(name, mode, islink, getdata):
1255 def write(name, mode, islink, getdata):
1250 if match and not match(name):
1256 if match and not match(name):
1251 return
1257 return
1252 data = getdata()
1258 data = getdata()
1253 if decode:
1259 if decode:
1254 data = repo.wwritedata(name, data)
1260 data = repo.wwritedata(name, data)
1255 archiver.addfile(prefix + name, mode, islink, data)
1261 archiver.addfile(prefix + name, mode, islink, data)
1256
1262
1257 archiver = archival.archivers[kind](dest, mtime or ctx.date()[0])
1263 archiver = archival.archivers[kind](dest, mtime or ctx.date()[0])
1258
1264
1259 if repo.ui.configbool(b"ui", b"archivemeta"):
1265 if repo.ui.configbool(b"ui", b"archivemeta"):
1260 write(
1266 write(
1261 b'.hg_archival.txt',
1267 b'.hg_archival.txt',
1262 0o644,
1268 0o644,
1263 False,
1269 False,
1264 lambda: archival.buildmetadata(ctx),
1270 lambda: archival.buildmetadata(ctx),
1265 )
1271 )
1266
1272
1267 for f in ctx:
1273 for f in ctx:
1268 ff = ctx.flags(f)
1274 ff = ctx.flags(f)
1269 getdata = ctx[f].data
1275 getdata = ctx[f].data
1270 lfile = lfutil.splitstandin(f)
1276 lfile = lfutil.splitstandin(f)
1271 if lfile is not None:
1277 if lfile is not None:
1272 if node is not None:
1278 if node is not None:
1273 path = lfutil.findfile(repo, getdata().strip())
1279 path = lfutil.findfile(repo, getdata().strip())
1274
1280
1275 if path is None:
1281 if path is None:
1276 raise error.Abort(
1282 raise error.Abort(
1277 _(
1283 _(
1278 b'largefile %s not found in repo store or system cache'
1284 b'largefile %s not found in repo store or system cache'
1279 )
1285 )
1280 % lfile
1286 % lfile
1281 )
1287 )
1282 else:
1288 else:
1283 path = lfile
1289 path = lfile
1284
1290
1285 f = lfile
1291 f = lfile
1286
1292
1287 getdata = lambda: util.readfile(path)
1293 getdata = lambda: util.readfile(path)
1288 write(f, b'x' in ff and 0o755 or 0o644, b'l' in ff, getdata)
1294 write(f, b'x' in ff and 0o755 or 0o644, b'l' in ff, getdata)
1289
1295
1290 if subrepos:
1296 if subrepos:
1291 for subpath in sorted(ctx.substate):
1297 for subpath in sorted(ctx.substate):
1292 sub = ctx.workingsub(subpath)
1298 sub = ctx.workingsub(subpath)
1293 submatch = matchmod.subdirmatcher(subpath, match)
1299 submatch = matchmod.subdirmatcher(subpath, match)
1294 subprefix = prefix + subpath + b'/'
1300 subprefix = prefix + subpath + b'/'
1295
1301
1296 # TODO: Only hgsubrepo instances have `_repo`, so figure out how to
1302 # TODO: Only hgsubrepo instances have `_repo`, so figure out how to
1297 # infer and possibly set lfstatus in hgsubrepoarchive. That would
1303 # infer and possibly set lfstatus in hgsubrepoarchive. That would
1298 # allow only hgsubrepos to set this, instead of the current scheme
1304 # allow only hgsubrepos to set this, instead of the current scheme
1299 # where the parent sets this for the child.
1305 # where the parent sets this for the child.
1300 with (
1306 with (
1301 hasattr(sub, '_repo')
1307 hasattr(sub, '_repo')
1302 and lfstatus(sub._repo)
1308 and lfstatus(sub._repo)
1303 or util.nullcontextmanager()
1309 or util.nullcontextmanager()
1304 ):
1310 ):
1305 sub.archive(archiver, subprefix, submatch)
1311 sub.archive(archiver, subprefix, submatch)
1306
1312
1307 archiver.done()
1313 archiver.done()
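# A standalone sketch of two idioms from the archive walk above. The real
# standin directory is lfutil.shortname ('.hglf'); the helper here is a
# simplified stand-in, and the mode expression mirrors
# "b'x' in ff and 0o755 or 0o644".
STANDIN_DIR = '.hglf/'


def splitstandin(path):
    # Map a standin path back to the largefile name, or None if it is a
    # normal file.
    if path.startswith(STANDIN_DIR):
        return path[len(STANDIN_DIR):]
    return None


def archive_mode(flags):
    return 0o755 if 'x' in flags else 0o644


assert splitstandin('.hglf/data/big.bin') == 'data/big.bin'
assert splitstandin('README') is None
assert archive_mode('x') == 0o755 and archive_mode('l') == 0o644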
1308
1314
1309
1315
1310 @eh.wrapfunction(subrepo.hgsubrepo, 'archive')
1316 @eh.wrapfunction(subrepo.hgsubrepo, 'archive')
1311 def hgsubrepoarchive(orig, repo, archiver, prefix, match=None, decode=True):
1317 def hgsubrepoarchive(orig, repo, archiver, prefix, match=None, decode=True):
1312 lfenabled = hasattr(repo._repo, '_largefilesenabled')
1318 lfenabled = hasattr(repo._repo, '_largefilesenabled')
1313 if not lfenabled or not repo._repo.lfstatus:
1319 if not lfenabled or not repo._repo.lfstatus:
1314 return orig(repo, archiver, prefix, match, decode)
1320 return orig(repo, archiver, prefix, match, decode)
1315
1321
1316 repo._get(repo._state + (b'hg',))
1322 repo._get(repo._state + (b'hg',))
1317 rev = repo._state[1]
1323 rev = repo._state[1]
1318 ctx = repo._repo[rev]
1324 ctx = repo._repo[rev]
1319
1325
1320 if ctx.node() is not None:
1326 if ctx.node() is not None:
1321 lfcommands.cachelfiles(repo.ui, repo._repo, ctx.node())
1327 lfcommands.cachelfiles(repo.ui, repo._repo, ctx.node())
1322
1328
1323 def write(name, mode, islink, getdata):
1329 def write(name, mode, islink, getdata):
1324 # At this point, the standin has been replaced with the largefile name,
1330 # At this point, the standin has been replaced with the largefile name,
1325 # so the normal matcher works here without the lfutil variants.
1331 # so the normal matcher works here without the lfutil variants.
1326 if match and not match(f):
1332 if match and not match(f):
1327 return
1333 return
1328 data = getdata()
1334 data = getdata()
1329 if decode:
1335 if decode:
1330 data = repo._repo.wwritedata(name, data)
1336 data = repo._repo.wwritedata(name, data)
1331
1337
1332 archiver.addfile(prefix + name, mode, islink, data)
1338 archiver.addfile(prefix + name, mode, islink, data)
1333
1339
1334 for f in ctx:
1340 for f in ctx:
1335 ff = ctx.flags(f)
1341 ff = ctx.flags(f)
1336 getdata = ctx[f].data
1342 getdata = ctx[f].data
1337 lfile = lfutil.splitstandin(f)
1343 lfile = lfutil.splitstandin(f)
1338 if lfile is not None:
1344 if lfile is not None:
1339 if ctx.node() is not None:
1345 if ctx.node() is not None:
1340 path = lfutil.findfile(repo._repo, getdata().strip())
1346 path = lfutil.findfile(repo._repo, getdata().strip())
1341
1347
1342 if path is None:
1348 if path is None:
1343 raise error.Abort(
1349 raise error.Abort(
1344 _(
1350 _(
1345 b'largefile %s not found in repo store or system cache'
1351 b'largefile %s not found in repo store or system cache'
1346 )
1352 )
1347 % lfile
1353 % lfile
1348 )
1354 )
1349 else:
1355 else:
1350 path = lfile
1356 path = lfile
1351
1357
1352 f = lfile
1358 f = lfile
1353
1359
1354 getdata = lambda: util.readfile(os.path.join(prefix, path))
1360 getdata = lambda: util.readfile(os.path.join(prefix, path))
1355
1361
1356 write(f, b'x' in ff and 0o755 or 0o644, b'l' in ff, getdata)
1362 write(f, b'x' in ff and 0o755 or 0o644, b'l' in ff, getdata)
1357
1363
1358 for subpath in sorted(ctx.substate):
1364 for subpath in sorted(ctx.substate):
1359 sub = ctx.workingsub(subpath)
1365 sub = ctx.workingsub(subpath)
1360 submatch = matchmod.subdirmatcher(subpath, match)
1366 submatch = matchmod.subdirmatcher(subpath, match)
1361 subprefix = prefix + subpath + b'/'
1367 subprefix = prefix + subpath + b'/'
1362 # TODO: Only hgsubrepo instances have `_repo`, so figure out how to
1368 # TODO: Only hgsubrepo instances have `_repo`, so figure out how to
1363 # infer and possibly set lfstatus at the top of this function. That
1369 # infer and possibly set lfstatus at the top of this function. That
1364 # would allow only hgsubrepos to set this, instead of the current scheme
1370 # would allow only hgsubrepos to set this, instead of the current scheme
1365 # where the parent sets this for the child.
1371 # where the parent sets this for the child.
1366 with (
1372 with (
1367 hasattr(sub, '_repo')
1373 hasattr(sub, '_repo')
1368 and lfstatus(sub._repo)
1374 and lfstatus(sub._repo)
1369 or util.nullcontextmanager()
1375 or util.nullcontextmanager()
1370 ):
1376 ):
1371 sub.archive(archiver, subprefix, submatch, decode)
1377 sub.archive(archiver, subprefix, submatch, decode)
1372
1378
1373
1379
1374 # If a largefile is modified, the change is not reflected in its
1380 # If a largefile is modified, the change is not reflected in its
1375 # standin until a commit. cmdutil.bailifchanged() raises an exception
1381 # standin until a commit. cmdutil.bailifchanged() raises an exception
1376 # if the repo has uncommitted changes. Wrap it to also check if
1382 # if the repo has uncommitted changes. Wrap it to also check if
1377 # largefiles were changed. This is used by bisect, backout and fetch.
1383 # largefiles were changed. This is used by bisect, backout and fetch.
1378 @eh.wrapfunction(cmdutil, 'bailifchanged')
1384 @eh.wrapfunction(cmdutil, 'bailifchanged')
1379 def overridebailifchanged(orig, repo, *args, **kwargs):
1385 def overridebailifchanged(orig, repo, *args, **kwargs):
1380 orig(repo, *args, **kwargs)
1386 orig(repo, *args, **kwargs)
1381 with lfstatus(repo):
1387 with lfstatus(repo):
1382 s = repo.status()
1388 s = repo.status()
1383 if s.modified or s.added or s.removed or s.deleted:
1389 if s.modified or s.added or s.removed or s.deleted:
1384 raise error.Abort(_(b'uncommitted changes'))
1390 raise error.Abort(_(b'uncommitted changes'))
1385
1391
1386
1392
1387 @eh.wrapfunction(cmdutil, 'postcommitstatus')
1393 @eh.wrapfunction(cmdutil, 'postcommitstatus')
1388 def postcommitstatus(orig, repo, *args, **kwargs):
1394 def postcommitstatus(orig, repo, *args, **kwargs):
1389 with lfstatus(repo):
1395 with lfstatus(repo):
1390 return orig(repo, *args, **kwargs)
1396 return orig(repo, *args, **kwargs)
1391
1397
1392
1398
1393 @eh.wrapfunction(cmdutil, 'forget')
1399 @eh.wrapfunction(cmdutil, 'forget')
1394 def cmdutilforget(
1400 def cmdutilforget(
1395 orig, ui, repo, match, prefix, uipathfn, explicitonly, dryrun, interactive
1401 orig, ui, repo, match, prefix, uipathfn, explicitonly, dryrun, interactive
1396 ):
1402 ):
1397 normalmatcher = composenormalfilematcher(match, repo[None].manifest())
1403 normalmatcher = composenormalfilematcher(match, repo[None].manifest())
1398 bad, forgot = orig(
1404 bad, forgot = orig(
1399 ui,
1405 ui,
1400 repo,
1406 repo,
1401 normalmatcher,
1407 normalmatcher,
1402 prefix,
1408 prefix,
1403 uipathfn,
1409 uipathfn,
1404 explicitonly,
1410 explicitonly,
1405 dryrun,
1411 dryrun,
1406 interactive,
1412 interactive,
1407 )
1413 )
1408 m = composelargefilematcher(match, repo[None].manifest())
1414 m = composelargefilematcher(match, repo[None].manifest())
1409
1415
1410 with lfstatus(repo):
1416 with lfstatus(repo):
1411 s = repo.status(match=m, clean=True)
1417 s = repo.status(match=m, clean=True)
1412 manifest = repo[None].manifest()
1418 manifest = repo[None].manifest()
1413 forget = sorted(s.modified + s.added + s.deleted + s.clean)
1419 forget = sorted(s.modified + s.added + s.deleted + s.clean)
1414 forget = [f for f in forget if lfutil.standin(f) in manifest]
1420 forget = [f for f in forget if lfutil.standin(f) in manifest]
1415
1421
1416 for f in forget:
1422 for f in forget:
1417 fstandin = lfutil.standin(f)
1423 fstandin = lfutil.standin(f)
1418 if fstandin not in repo.dirstate and not repo.wvfs.isdir(fstandin):
1424 if fstandin not in repo.dirstate and not repo.wvfs.isdir(fstandin):
1419 ui.warn(
1425 ui.warn(
1420 _(b'not removing %s: file is already untracked\n') % uipathfn(f)
1426 _(b'not removing %s: file is already untracked\n') % uipathfn(f)
1421 )
1427 )
1422 bad.append(f)
1428 bad.append(f)
1423
1429
1424 for f in forget:
1430 for f in forget:
1425 if ui.verbose or not m.exact(f):
1431 if ui.verbose or not m.exact(f):
1426 ui.status(_(b'removing %s\n') % uipathfn(f))
1432 ui.status(_(b'removing %s\n') % uipathfn(f))
1427
1433
1428 # Need to lock because standin files are deleted then removed from the
1434 # Need to lock because standin files are deleted then removed from the
1429 # repository and we could race in-between.
1435 # repository and we could race in-between.
1430 with repo.wlock():
1436 with repo.wlock():
1431 lfdirstate = lfutil.openlfdirstate(ui, repo)
1437 lfdirstate = lfutil.openlfdirstate(ui, repo)
1432 for f in forget:
1438 for f in forget:
1433 lfdirstate.set_untracked(f)
1439 lfdirstate.set_untracked(f)
1434 lfdirstate.write(repo.currenttransaction())
1440 lfdirstate.write(repo.currenttransaction())
1435 standins = [lfutil.standin(f) for f in forget]
1441 standins = [lfutil.standin(f) for f in forget]
1436 for f in standins:
1442 for f in standins:
1437 repo.wvfs.unlinkpath(f, ignoremissing=True)
1443 repo.wvfs.unlinkpath(f, ignoremissing=True)
1438 rejected = repo[None].forget(standins)
1444 rejected = repo[None].forget(standins)
1439
1445
1440 bad.extend(f for f in rejected if f in m.files())
1446 bad.extend(f for f in rejected if f in m.files())
1441 forgot.extend(f for f in forget if f not in rejected)
1447 forgot.extend(f for f in forget if f not in rejected)
1442 return bad, forgot
1448 return bad, forgot
1443
1449
1444
1450
1445 def _getoutgoings(repo, other, missing, addfunc):
1451 def _getoutgoings(repo, other, missing, addfunc):
1446 """get pairs of filename and largefile hash in outgoing revisions
1452 """get pairs of filename and largefile hash in outgoing revisions
1447 in 'missing'.
1453 in 'missing'.
1448
1454
1449     largefiles already existing on the 'other' repository are ignored.
1455     largefiles already existing on the 'other' repository are ignored.
1450
1456
1451     'addfunc' is invoked with each unique pair of filename and
1457     'addfunc' is invoked with each unique pair of filename and
1452     largefile hash value.
1458     largefile hash value.
1453 """
1459 """
1454 knowns = set()
1460 knowns = set()
1455 lfhashes = set()
1461 lfhashes = set()
1456
1462
1457 def dedup(fn, lfhash):
1463 def dedup(fn, lfhash):
1458 k = (fn, lfhash)
1464 k = (fn, lfhash)
1459 if k not in knowns:
1465 if k not in knowns:
1460 knowns.add(k)
1466 knowns.add(k)
1461 lfhashes.add(lfhash)
1467 lfhashes.add(lfhash)
1462
1468
1463 lfutil.getlfilestoupload(repo, missing, dedup)
1469 lfutil.getlfilestoupload(repo, missing, dedup)
1464 if lfhashes:
1470 if lfhashes:
1465 lfexists = storefactory.openstore(repo, other).exists(lfhashes)
1471 lfexists = storefactory.openstore(repo, other).exists(lfhashes)
1466 for fn, lfhash in knowns:
1472 for fn, lfhash in knowns:
1467 if not lfexists[lfhash]: # lfhash doesn't exist on "other"
1473 if not lfexists[lfhash]: # lfhash doesn't exist on "other"
1468 addfunc(fn, lfhash)
1474 addfunc(fn, lfhash)
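# A standalone sketch of the dedup-and-batch pattern implemented above:
# remember (filename, hash) pairs in a set, collect the distinct hashes,
# make one batched existence query, then invoke the callback only for
# hashes missing remotely. remote_hashes stands in for the store's
# exists() query.
def get_outgoings(pairs, remote_hashes, addfunc):
    knowns = set()
    lfhashes = set()
    for fn, lfhash in pairs:
        if (fn, lfhash) not in knowns:
            knowns.add((fn, lfhash))
            lfhashes.add(lfhash)
    lfexists = {h: h in remote_hashes for h in lfhashes}  # one batched query
    for fn, lfhash in knowns:
        if not lfexists[lfhash]:  # lfhash doesn't exist on the remote
            addfunc(fn, lfhash)


out = []
get_outgoings(
    [('a.bin', 'h1'), ('a.bin', 'h1'), ('b.bin', 'h2')],
    {'h2'},
    lambda fn, lfhash: out.append((fn, lfhash)),
)
assert out == [('a.bin', 'h1')]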
1469
1475
1470
1476
1471 def outgoinghook(ui, repo, other, opts, missing):
1477 def outgoinghook(ui, repo, other, opts, missing):
1472 if opts.pop(b'large', None):
1478 if opts.pop(b'large', None):
1473 lfhashes = set()
1479 lfhashes = set()
1474 if ui.debugflag:
1480 if ui.debugflag:
1475 toupload = {}
1481 toupload = {}
1476
1482
1477 def addfunc(fn, lfhash):
1483 def addfunc(fn, lfhash):
1478 if fn not in toupload:
1484 if fn not in toupload:
1479 toupload[fn] = [] # pytype: disable=unsupported-operands
1485 toupload[fn] = [] # pytype: disable=unsupported-operands
1480 toupload[fn].append(lfhash)
1486 toupload[fn].append(lfhash)
1481 lfhashes.add(lfhash)
1487 lfhashes.add(lfhash)
1482
1488
1483 def showhashes(fn):
1489 def showhashes(fn):
1484 for lfhash in sorted(toupload[fn]):
1490 for lfhash in sorted(toupload[fn]):
1485 ui.debug(b' %s\n' % lfhash)
1491 ui.debug(b' %s\n' % lfhash)
1486
1492
1487 else:
1493 else:
1488 toupload = set()
1494 toupload = set()
1489
1495
1490 def addfunc(fn, lfhash):
1496 def addfunc(fn, lfhash):
1491 toupload.add(fn)
1497 toupload.add(fn)
1492 lfhashes.add(lfhash)
1498 lfhashes.add(lfhash)
1493
1499
1494 def showhashes(fn):
1500 def showhashes(fn):
1495 pass
1501 pass
1496
1502
1497 _getoutgoings(repo, other, missing, addfunc)
1503 _getoutgoings(repo, other, missing, addfunc)
1498
1504
1499 if not toupload:
1505 if not toupload:
1500 ui.status(_(b'largefiles: no files to upload\n'))
1506 ui.status(_(b'largefiles: no files to upload\n'))
1501 else:
1507 else:
1502 ui.status(
1508 ui.status(
1503 _(b'largefiles to upload (%d entities):\n') % (len(lfhashes))
1509 _(b'largefiles to upload (%d entities):\n') % (len(lfhashes))
1504 )
1510 )
1505 for file in sorted(toupload):
1511 for file in sorted(toupload):
1506 ui.status(lfutil.splitstandin(file) + b'\n')
1512 ui.status(lfutil.splitstandin(file) + b'\n')
1507 showhashes(file)
1513 showhashes(file)
1508 ui.status(b'\n')
1514 ui.status(b'\n')
1509
1515
1510
1516
1511 @eh.wrapcommand(
1517 @eh.wrapcommand(
1512 b'outgoing', opts=[(b'', b'large', None, _(b'display outgoing largefiles'))]
1518 b'outgoing', opts=[(b'', b'large', None, _(b'display outgoing largefiles'))]
1513 )
1519 )
1514 def _outgoingcmd(orig, *args, **kwargs):
1520 def _outgoingcmd(orig, *args, **kwargs):
1515     # Nothing to do here other than add the extra help option; the hook above
1521     # Nothing to do here other than add the extra help option; the hook above
1516     # processes it.
1522     # processes it.
1517 return orig(*args, **kwargs)
1523 return orig(*args, **kwargs)
1518
1524
1519
1525
1520 def summaryremotehook(ui, repo, opts, changes):
1526 def summaryremotehook(ui, repo, opts, changes):
1521 largeopt = opts.get(b'large', False)
1527 largeopt = opts.get(b'large', False)
1522 if changes is None:
1528 if changes is None:
1523 if largeopt:
1529 if largeopt:
1524 return (False, True) # only outgoing check is needed
1530 return (False, True) # only outgoing check is needed
1525 else:
1531 else:
1526 return (False, False)
1532 return (False, False)
1527 elif largeopt:
1533 elif largeopt:
1528 url, branch, peer, outgoing = changes[1]
1534 url, branch, peer, outgoing = changes[1]
1529 if peer is None:
1535 if peer is None:
1530 # i18n: column positioning for "hg summary"
1536 # i18n: column positioning for "hg summary"
1531 ui.status(_(b'largefiles: (no remote repo)\n'))
1537 ui.status(_(b'largefiles: (no remote repo)\n'))
1532 return
1538 return
1533
1539
1534 toupload = set()
1540 toupload = set()
1535 lfhashes = set()
1541 lfhashes = set()
1536
1542
1537 def addfunc(fn, lfhash):
1543 def addfunc(fn, lfhash):
1538 toupload.add(fn)
1544 toupload.add(fn)
1539 lfhashes.add(lfhash)
1545 lfhashes.add(lfhash)
1540
1546
1541 _getoutgoings(repo, peer, outgoing.missing, addfunc)
1547 _getoutgoings(repo, peer, outgoing.missing, addfunc)
1542
1548
1543 if not toupload:
1549 if not toupload:
1544 # i18n: column positioning for "hg summary"
1550 # i18n: column positioning for "hg summary"
1545 ui.status(_(b'largefiles: (no files to upload)\n'))
1551 ui.status(_(b'largefiles: (no files to upload)\n'))
1546 else:
1552 else:
1547 # i18n: column positioning for "hg summary"
1553 # i18n: column positioning for "hg summary"
1548 ui.status(
1554 ui.status(
1549 _(b'largefiles: %d entities for %d files to upload\n')
1555 _(b'largefiles: %d entities for %d files to upload\n')
1550 % (len(lfhashes), len(toupload))
1556 % (len(lfhashes), len(toupload))
1551 )
1557 )
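# A standalone sketch (hedged; names are illustrative) of the two-phase
# protocol summaryremotehook follows above: called once with changes=None
# to declare which remote computations it needs as a
# (needs-incoming, needs-outgoing) pair, then called again with the
# computed changes to print its report.
def summary_hook(opts, changes):
    large = opts.get('large', False)
    if changes is None:
        return (False, large)  # only the outgoing check is needed
    _incoming, outgoing_info = changes
    return 'report for %r' % (outgoing_info,) if large else None


assert summary_hook({'large': True}, None) == (False, True)
assert summary_hook({}, None) == (False, False)
assert summary_hook({}, (None, ('url', 'branch', 'peer', 'out'))) is None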
1552
1558
1553
1559
1554 @eh.wrapcommand(
1560 @eh.wrapcommand(
1555 b'summary', opts=[(b'', b'large', None, _(b'display outgoing largefiles'))]
1561 b'summary', opts=[(b'', b'large', None, _(b'display outgoing largefiles'))]
1556 )
1562 )
1557 def overridesummary(orig, ui, repo, *pats, **opts):
1563 def overridesummary(orig, ui, repo, *pats, **opts):
1558 with lfstatus(repo):
1564 with lfstatus(repo):
1559 orig(ui, repo, *pats, **opts)
1565 orig(ui, repo, *pats, **opts)
1560
1566
1561
1567
1562 @eh.wrapfunction(scmutil, 'addremove')
1568 @eh.wrapfunction(scmutil, 'addremove')
1563 def scmutiladdremove(
1569 def scmutiladdremove(
1564 orig,
1570 orig,
1565 repo,
1571 repo,
1566 matcher,
1572 matcher,
1567 prefix,
1573 prefix,
1568 uipathfn,
1574 uipathfn,
1569 opts=None,
1575 opts=None,
1570 open_tr=None,
1576 open_tr=None,
1571 ):
1577 ):
1572 if opts is None:
1578 if opts is None:
1573 opts = {}
1579 opts = {}
1574 if not lfutil.islfilesrepo(repo):
1580 if not lfutil.islfilesrepo(repo):
1575 return orig(repo, matcher, prefix, uipathfn, opts, open_tr=open_tr)
1581 return orig(repo, matcher, prefix, uipathfn, opts, open_tr=open_tr)
1576
1582
1577 # open the transaction and changing_files context
1583 # open the transaction and changing_files context
1578 if open_tr is not None:
1584 if open_tr is not None:
1579 open_tr()
1585 open_tr()
1580
1586
1581 # Get the list of missing largefiles so we can remove them
1587 # Get the list of missing largefiles so we can remove them
1582 with repo.dirstate.running_status(repo):
1588 with repo.dirstate.running_status(repo):
1583 lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
1589 lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
1584 unsure, s, mtime_boundary = lfdirstate.status(
1590 unsure, s, mtime_boundary = lfdirstate.status(
1585 matchmod.always(),
1591 matchmod.always(),
1586 subrepos=[],
1592 subrepos=[],
1587 ignored=False,
1593 ignored=False,
1588 clean=False,
1594 clean=False,
1589 unknown=False,
1595 unknown=False,
1590 )
1596 )
1591
1597
1592     # Call into the normal remove code, but we want the removal of the
1598     # Call into the normal remove code, but we want the removal of the
1593     # standin itself to be handled by the original addremove. Monkey patching
1599     # standin itself to be handled by the original addremove. Monkey patching
1594     # here makes sure we don't remove the standin in the largefiles code,
1600     # here makes sure we don't remove the standin in the largefiles code,
1595     # preventing a very confused state later.
1601     # preventing a very confused state later.
1596 if s.deleted:
1602 if s.deleted:
1597 m = copy.copy(matcher)
1603 m = copy.copy(matcher)
1604 m._was_tampered_with = True
1598
1605
1599 # The m._files and m._map attributes are not changed to the deleted list
1606 # The m._files and m._map attributes are not changed to the deleted list
1600 # because that affects the m.exact() test, which in turn governs whether
1607 # because that affects the m.exact() test, which in turn governs whether
1601 # or not the file name is printed, and how. Simply limit the original
1608 # or not the file name is printed, and how. Simply limit the original
1602 # matches to those in the deleted status list.
1609 # matches to those in the deleted status list.
1603 matchfn = m.matchfn
1610 matchfn = m.matchfn
1604 m.matchfn = lambda f: f in s.deleted and matchfn(f)
1611 m.matchfn = lambda f: f in s.deleted and matchfn(f)
1605
1612
1606 removelargefiles(
1613 removelargefiles(
1607 repo.ui,
1614 repo.ui,
1608 repo,
1615 repo,
1609 True,
1616 True,
1610 m,
1617 m,
1611 uipathfn,
1618 uipathfn,
1612 opts.get(b'dry_run'),
1619 opts.get(b'dry_run'),
1613 **pycompat.strkwargs(opts)
1620 **pycompat.strkwargs(opts)
1614 )
1621 )
1615 # Call into the normal add code, and any files that *should* be added as
1622 # Call into the normal add code, and any files that *should* be added as
1616 # largefiles will be
1623 # largefiles will be
1617 added, bad = addlargefiles(
1624 added, bad = addlargefiles(
1618 repo.ui, repo, True, matcher, uipathfn, **pycompat.strkwargs(opts)
1625 repo.ui, repo, True, matcher, uipathfn, **pycompat.strkwargs(opts)
1619 )
1626 )
1620 # Now that we've handled largefiles, hand off to the original addremove
1627 # Now that we've handled largefiles, hand off to the original addremove
1621 # function to take care of the rest. Make sure it doesn't do anything with
1628 # function to take care of the rest. Make sure it doesn't do anything with
1622 # largefiles by passing a matcher that will ignore them.
1629 # largefiles by passing a matcher that will ignore them.
1623 matcher = composenormalfilematcher(matcher, repo[None].manifest(), added)
1630 matcher = composenormalfilematcher(matcher, repo[None].manifest(), added)
1624
1631
1625 return orig(repo, matcher, prefix, uipathfn, opts, open_tr=open_tr)
1632 return orig(repo, matcher, prefix, uipathfn, opts, open_tr=open_tr)
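# A standalone sketch of the matcher-narrowing trick used above: copy the
# match object so exact() and friends keep their behavior, then intersect
# its matchfn with a membership test on the deleted set. FakeMatcher is a
# plain-Python stand-in for the real matcher class.
import copy


class FakeMatcher:
    def __init__(self, fn):
        self.matchfn = fn

    def __call__(self, f):
        return self.matchfn(f)


base = FakeMatcher(lambda f: f.endswith('.bin'))
deleted = {'a.bin', 'b.txt'}

m = copy.copy(base)
matchfn = m.matchfn
m.matchfn = lambda f: f in deleted and matchfn(f)

assert m('a.bin')      # deleted and matched by the original matcher
assert not m('c.bin')  # matched, but not deleted
assert not m('b.txt')  # deleted, but the original matcher rejects it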
1626
1633
1627
1634
1628 # Calling purge with --all will cause the largefiles to be deleted.
1635 # Calling purge with --all will cause the largefiles to be deleted.
1629 # Override repo.status to prevent this from happening.
1636 # Override repo.status to prevent this from happening.
1630 @eh.wrapcommand(b'purge')
1637 @eh.wrapcommand(b'purge')
1631 def overridepurge(orig, ui, repo, *dirs, **opts):
1638 def overridepurge(orig, ui, repo, *dirs, **opts):
1632 # XXX Monkey patching a repoview will not work. The assigned attribute will
1639 # XXX Monkey patching a repoview will not work. The assigned attribute will
1633 # be set on the unfiltered repo, but we will only lookup attributes in the
1640 # be set on the unfiltered repo, but we will only lookup attributes in the
1634 # unfiltered repo if the lookup in the repoview object itself fails. As the
1641 # unfiltered repo if the lookup in the repoview object itself fails. As the
1635 # monkey patched method exists on the repoview class the lookup will not
1642 # monkey patched method exists on the repoview class the lookup will not
1636 # fail. As a result, the original version will shadow the monkey patched
1643 # fail. As a result, the original version will shadow the monkey patched
1637 # one, defeating the monkey patch.
1644 # one, defeating the monkey patch.
1638 #
1645 #
1639     # As a workaround we use an unfiltered repo here. We should do something
1646     # As a workaround we use an unfiltered repo here. We should do something
1640     # cleaner instead.
1647     # cleaner instead.
1641 repo = repo.unfiltered()
1648 repo = repo.unfiltered()
1642 oldstatus = repo.status
1649 oldstatus = repo.status
1643
1650
1644 def overridestatus(
1651 def overridestatus(
1645 node1=b'.',
1652 node1=b'.',
1646 node2=None,
1653 node2=None,
1647 match=None,
1654 match=None,
1648 ignored=False,
1655 ignored=False,
1649 clean=False,
1656 clean=False,
1650 unknown=False,
1657 unknown=False,
1651 listsubrepos=False,
1658 listsubrepos=False,
1652 ):
1659 ):
1653 r = oldstatus(
1660 r = oldstatus(
1654 node1, node2, match, ignored, clean, unknown, listsubrepos
1661 node1, node2, match, ignored, clean, unknown, listsubrepos
1655 )
1662 )
1656 lfdirstate = lfutil.openlfdirstate(ui, repo)
1663 lfdirstate = lfutil.openlfdirstate(ui, repo)
1657 unknown = [
1664 unknown = [
1658 f for f in r.unknown if not lfdirstate.get_entry(f).any_tracked
1665 f for f in r.unknown if not lfdirstate.get_entry(f).any_tracked
1659 ]
1666 ]
1660 ignored = [
1667 ignored = [
1661 f for f in r.ignored if not lfdirstate.get_entry(f).any_tracked
1668 f for f in r.ignored if not lfdirstate.get_entry(f).any_tracked
1662 ]
1669 ]
1663 return scmutil.status(
1670 return scmutil.status(
1664 r.modified, r.added, r.removed, r.deleted, unknown, ignored, r.clean
1671 r.modified, r.added, r.removed, r.deleted, unknown, ignored, r.clean
1665 )
1672 )
1666
1673
1667 repo.status = overridestatus
1674 repo.status = overridestatus
1668 orig(ui, repo, *dirs, **opts)
1675 orig(ui, repo, *dirs, **opts)
1669 repo.status = oldstatus
1676 repo.status = oldstatus
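# A standalone sketch of the repoview problem described in the comment
# above: writes are forwarded to the wrapped (unfiltered) repo, but reads
# find the method on the proxy's class first, so the patched version is
# never reached through the view. Plain Python illustration only.
class FakeUnfiltered:
    def status(self):
        return 'original'


class FakeRepoView:
    def __init__(self, unfiltered):
        object.__setattr__(self, '_unfiltered', unfiltered)

    def __setattr__(self, name, value):
        # assigned attributes land on the unfiltered repo
        setattr(self._unfiltered, name, value)

    def __getattr__(self, name):
        # only consulted when normal lookup fails
        return getattr(self._unfiltered, name)

    def status(self):
        # exists on the view class, so lookup never falls through
        return FakeUnfiltered.status(self._unfiltered)


view = FakeRepoView(FakeUnfiltered())
view.status = lambda: 'patched'  # silently stored on the unfiltered repo
assert view.status() == 'original'  # the class method shadows the patch
assert view._unfiltered.status() == 'patched'  # patch only visible here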
1670
1677
1671
1678
1672 @eh.wrapcommand(b'rollback')
1679 @eh.wrapcommand(b'rollback')
1673 def overriderollback(orig, ui, repo, **opts):
1680 def overriderollback(orig, ui, repo, **opts):
1674 with repo.wlock():
1681 with repo.wlock():
1675 before = repo.dirstate.parents()
1682 before = repo.dirstate.parents()
1676 orphans = {
1683 orphans = {
1677 f
1684 f
1678 for f in repo.dirstate
1685 for f in repo.dirstate
1679 if lfutil.isstandin(f) and not repo.dirstate.get_entry(f).removed
1686 if lfutil.isstandin(f) and not repo.dirstate.get_entry(f).removed
1680 }
1687 }
1681 result = orig(ui, repo, **opts)
1688 result = orig(ui, repo, **opts)
1682 after = repo.dirstate.parents()
1689 after = repo.dirstate.parents()
1683 if before == after:
1690 if before == after:
1684 return result # no need to restore standins
1691 return result # no need to restore standins
1685
1692
1686 pctx = repo[b'.']
1693 pctx = repo[b'.']
1687 for f in repo.dirstate:
1694 for f in repo.dirstate:
1688 if lfutil.isstandin(f):
1695 if lfutil.isstandin(f):
1689 orphans.discard(f)
1696 orphans.discard(f)
1690 if repo.dirstate.get_entry(f).removed:
1697 if repo.dirstate.get_entry(f).removed:
1691 repo.wvfs.unlinkpath(f, ignoremissing=True)
1698 repo.wvfs.unlinkpath(f, ignoremissing=True)
1692 elif f in pctx:
1699 elif f in pctx:
1693 fctx = pctx[f]
1700 fctx = pctx[f]
1694 repo.wwrite(f, fctx.data(), fctx.flags())
1701 repo.wwrite(f, fctx.data(), fctx.flags())
1695 else:
1702 else:
1696                 # the content of the standin is not so important in the 'a',
1703                 # the content of the standin is not so important in the 'a',
1697                 # 'm' or 'n' (coming from the 2nd parent) cases
1704                 # 'm' or 'n' (coming from the 2nd parent) cases
1698 lfutil.writestandin(repo, f, b'', False)
1705 lfutil.writestandin(repo, f, b'', False)
1699 for standin in orphans:
1706 for standin in orphans:
1700 repo.wvfs.unlinkpath(standin, ignoremissing=True)
1707 repo.wvfs.unlinkpath(standin, ignoremissing=True)
1701
1708
1702 return result
1709 return result
1703
1710
1704
1711
1705 @eh.wrapcommand(b'transplant', extension=b'transplant')
1712 @eh.wrapcommand(b'transplant', extension=b'transplant')
1706 def overridetransplant(orig, ui, repo, *revs, **opts):
1713 def overridetransplant(orig, ui, repo, *revs, **opts):
1707 resuming = opts.get('continue')
1714 resuming = opts.get('continue')
1708 repo._lfcommithooks.append(lfutil.automatedcommithook(resuming))
1715 repo._lfcommithooks.append(lfutil.automatedcommithook(resuming))
1709 repo._lfstatuswriters.append(lambda *msg, **opts: None)
1716 repo._lfstatuswriters.append(lambda *msg, **opts: None)
1710 try:
1717 try:
1711 result = orig(ui, repo, *revs, **opts)
1718 result = orig(ui, repo, *revs, **opts)
1712 finally:
1719 finally:
1713 repo._lfstatuswriters.pop()
1720 repo._lfstatuswriters.pop()
1714 repo._lfcommithooks.pop()
1721 repo._lfcommithooks.pop()
1715 return result
1722 return result
1716
1723
1717
1724
1718 @eh.wrapcommand(b'cat')
1725 @eh.wrapcommand(b'cat')
1719 def overridecat(orig, ui, repo, file1, *pats, **opts):
1726 def overridecat(orig, ui, repo, file1, *pats, **opts):
1720 ctx = logcmdutil.revsingle(repo, opts.get('rev'))
1727 ctx = logcmdutil.revsingle(repo, opts.get('rev'))
1721 err = 1
1728 err = 1
1722 notbad = set()
1729 notbad = set()
1723 m = scmutil.match(ctx, (file1,) + pats, pycompat.byteskwargs(opts))
1730 m = scmutil.match(ctx, (file1,) + pats, pycompat.byteskwargs(opts))
1731 m._was_tampered_with = True
1724 origmatchfn = m.matchfn
1732 origmatchfn = m.matchfn
1725
1733
1726 def lfmatchfn(f):
1734 def lfmatchfn(f):
1727 if origmatchfn(f):
1735 if origmatchfn(f):
1728 return True
1736 return True
1729 lf = lfutil.splitstandin(f)
1737 lf = lfutil.splitstandin(f)
1730 if lf is None:
1738 if lf is None:
1731 return False
1739 return False
1732 notbad.add(lf)
1740 notbad.add(lf)
1733 return origmatchfn(lf)
1741 return origmatchfn(lf)
1734
1742
1735 m.matchfn = lfmatchfn
1743 m.matchfn = lfmatchfn
1736 origbadfn = m.bad
1744 origbadfn = m.bad
1737
1745
1738 def lfbadfn(f, msg):
1746 def lfbadfn(f, msg):
1739         if f not in notbad:
1747         if f not in notbad:
1740 origbadfn(f, msg)
1748 origbadfn(f, msg)
1741
1749
1742 m.bad = lfbadfn
1750 m.bad = lfbadfn
1743
1751
1744 origvisitdirfn = m.visitdir
1752 origvisitdirfn = m.visitdir
1745
1753
1746 def lfvisitdirfn(dir):
1754 def lfvisitdirfn(dir):
1747 if dir == lfutil.shortname:
1755 if dir == lfutil.shortname:
1748 return True
1756 return True
1749 ret = origvisitdirfn(dir)
1757 ret = origvisitdirfn(dir)
1750 if ret:
1758 if ret:
1751 return ret
1759 return ret
1752 lf = lfutil.splitstandin(dir)
1760 lf = lfutil.splitstandin(dir)
1753 if lf is None:
1761 if lf is None:
1754 return False
1762 return False
1755 return origvisitdirfn(lf)
1763 return origvisitdirfn(lf)
1756
1764
1757 m.visitdir = lfvisitdirfn
1765 m.visitdir = lfvisitdirfn
1758
1766
1759 for f in ctx.walk(m):
1767 for f in ctx.walk(m):
1760 with cmdutil.makefileobj(ctx, opts.get('output'), pathname=f) as fp:
1768 with cmdutil.makefileobj(ctx, opts.get('output'), pathname=f) as fp:
1761 lf = lfutil.splitstandin(f)
1769 lf = lfutil.splitstandin(f)
1762 if lf is None or origmatchfn(f):
1770 if lf is None or origmatchfn(f):
1763 # duplicating unreachable code from commands.cat
1771 # duplicating unreachable code from commands.cat
1764 data = ctx[f].data()
1772 data = ctx[f].data()
1765 if opts.get('decode'):
1773 if opts.get('decode'):
1766 data = repo.wwritedata(f, data)
1774 data = repo.wwritedata(f, data)
1767 fp.write(data)
1775 fp.write(data)
1768 else:
1776 else:
1769 hash = lfutil.readasstandin(ctx[f])
1777 hash = lfutil.readasstandin(ctx[f])
1770 if not lfutil.inusercache(repo.ui, hash):
1778 if not lfutil.inusercache(repo.ui, hash):
1771 store = storefactory.openstore(repo)
1779 store = storefactory.openstore(repo)
1772 success, missing = store.get([(lf, hash)])
1780 success, missing = store.get([(lf, hash)])
1773 if len(success) != 1:
1781 if len(success) != 1:
1774 raise error.Abort(
1782 raise error.Abort(
1775 _(
1783 _(
1776 b'largefile %s is not in cache and could not be '
1784 b'largefile %s is not in cache and could not be '
1777 b'downloaded'
1785 b'downloaded'
1778 )
1786 )
1779 % lf
1787 % lf
1780 )
1788 )
1781 path = lfutil.usercachepath(repo.ui, hash)
1789 path = lfutil.usercachepath(repo.ui, hash)
1782 with open(path, b"rb") as fpin:
1790 with open(path, b"rb") as fpin:
1783 for chunk in util.filechunkiter(fpin):
1791 for chunk in util.filechunkiter(fpin):
1784 fp.write(chunk)
1792 fp.write(chunk)
1785 err = 0
1793 err = 0
1786 return err
1794 return err
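# A standalone sketch of the chunked copy at the end of overridecat:
# stream a cached largefile into the output in fixed-size chunks instead
# of loading it whole. The chunk size here is illustrative; it mirrors
# the shape of util.filechunkiter rather than its exact default.
import io


def filechunkiter(fp, size=131072):
    while True:
        chunk = fp.read(size)
        if not chunk:
            break
        yield chunk


src = io.BytesIO(b'x' * 300000)
dst = io.BytesIO()
for chunk in filechunkiter(src):
    dst.write(chunk)
assert dst.getvalue() == b'x' * 300000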
1787
1795
1788
1796
1789 @eh.wrapfunction(merge, '_update')
1797 @eh.wrapfunction(merge, '_update')
1790 def mergeupdate(orig, repo, node, branchmerge, force, *args, **kwargs):
1798 def mergeupdate(orig, repo, node, branchmerge, force, *args, **kwargs):
1791 matcher = kwargs.get('matcher', None)
1799 matcher = kwargs.get('matcher', None)
1792 # note if this is a partial update
1800 # note if this is a partial update
1793 partial = matcher and not matcher.always()
1801 partial = matcher and not matcher.always()
1794 with repo.wlock(), repo.dirstate.changing_parents(repo):
1802 with repo.wlock(), repo.dirstate.changing_parents(repo):
1795 # branch | | |
1803 # branch | | |
1796 # merge | force | partial | action
1804 # merge | force | partial | action
1797 # -------+-------+---------+--------------
1805 # -------+-------+---------+--------------
1798 # x | x | x | linear-merge
1806 # x | x | x | linear-merge
1799 # o | x | x | branch-merge
1807 # o | x | x | branch-merge
1800 # x | o | x | overwrite (as clean update)
1808 # x | o | x | overwrite (as clean update)
1801 # o | o | x | force-branch-merge (*1)
1809 # o | o | x | force-branch-merge (*1)
1802 # x | x | o | (*)
1810 # x | x | o | (*)
1803 # o | x | o | (*)
1811 # o | x | o | (*)
1804 # x | o | o | overwrite (as revert)
1812 # x | o | o | overwrite (as revert)
1805 # o | o | o | (*)
1813 # o | o | o | (*)
1806 #
1814 #
1807 # (*) don't care
1815 # (*) don't care
1808 # (*1) deprecated, but used internally (e.g: "rebase --collapse")
1816 # (*1) deprecated, but used internally (e.g: "rebase --collapse")
1809 with repo.dirstate.running_status(repo):
1817 with repo.dirstate.running_status(repo):
1810 lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
1818 lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
1811 unsure, s, mtime_boundary = lfdirstate.status(
1819 unsure, s, mtime_boundary = lfdirstate.status(
1812 matchmod.always(),
1820 matchmod.always(),
1813 subrepos=[],
1821 subrepos=[],
1814 ignored=False,
1822 ignored=False,
1815 clean=True,
1823 clean=True,
1816 unknown=False,
1824 unknown=False,
1817 )
1825 )
1818 oldclean = set(s.clean)
1826 oldclean = set(s.clean)
1819 pctx = repo[b'.']
1827 pctx = repo[b'.']
1820 dctx = repo[node]
1828 dctx = repo[node]
1821 for lfile in unsure + s.modified:
1829 for lfile in unsure + s.modified:
1822 lfileabs = repo.wvfs.join(lfile)
1830 lfileabs = repo.wvfs.join(lfile)
1823 if not repo.wvfs.exists(lfileabs):
1831 if not repo.wvfs.exists(lfileabs):
1824 continue
1832 continue
1825 lfhash = lfutil.hashfile(lfileabs)
1833 lfhash = lfutil.hashfile(lfileabs)
1826 standin = lfutil.standin(lfile)
1834 standin = lfutil.standin(lfile)
1827 lfutil.writestandin(
1835 lfutil.writestandin(
1828 repo, standin, lfhash, lfutil.getexecutable(lfileabs)
1836 repo, standin, lfhash, lfutil.getexecutable(lfileabs)
1829 )
1837 )
1830 if standin in pctx and lfhash == lfutil.readasstandin(
1838 if standin in pctx and lfhash == lfutil.readasstandin(
1831 pctx[standin]
1839 pctx[standin]
1832 ):
1840 ):
1833 oldclean.add(lfile)
1841 oldclean.add(lfile)
1834 for lfile in s.added:
1842 for lfile in s.added:
1835 fstandin = lfutil.standin(lfile)
1843 fstandin = lfutil.standin(lfile)
1836 if fstandin not in dctx:
1844 if fstandin not in dctx:
1837 # in this case, content of standin file is meaningless
1845 # in this case, content of standin file is meaningless
1838 # (in dctx, lfile is unknown, or normal file)
1846 # (in dctx, lfile is unknown, or normal file)
1839 continue
1847 continue
1840 lfutil.updatestandin(repo, lfile, fstandin)
1848 lfutil.updatestandin(repo, lfile, fstandin)
1841 # mark all clean largefiles as dirty, just in case the update gets
1849 # mark all clean largefiles as dirty, just in case the update gets
1842 # interrupted before largefiles and lfdirstate are synchronized
1850 # interrupted before largefiles and lfdirstate are synchronized
1843 for lfile in oldclean:
1851 for lfile in oldclean:
1844 entry = lfdirstate.get_entry(lfile)
1852 entry = lfdirstate.get_entry(lfile)
1845 lfdirstate.hacky_extension_update_file(
1853 lfdirstate.hacky_extension_update_file(
1846 lfile,
1854 lfile,
1847 wc_tracked=entry.tracked,
1855 wc_tracked=entry.tracked,
1848 p1_tracked=entry.p1_tracked,
1856 p1_tracked=entry.p1_tracked,
1849 p2_info=entry.p2_info,
1857 p2_info=entry.p2_info,
1850 possibly_dirty=True,
1858 possibly_dirty=True,
1851 )
1859 )
1852 lfdirstate.write(repo.currenttransaction())
1860 lfdirstate.write(repo.currenttransaction())
1853
1861
1854 oldstandins = lfutil.getstandinsstate(repo)
1862 oldstandins = lfutil.getstandinsstate(repo)
1855 wc = kwargs.get('wc')
1863 wc = kwargs.get('wc')
1856 if wc and wc.isinmemory():
1864 if wc and wc.isinmemory():
1857 # largefiles is not a good candidate for in-memory merge (large
1865 # largefiles is not a good candidate for in-memory merge (large
1858 # files, custom dirstate, matcher usage).
1866 # files, custom dirstate, matcher usage).
1859 raise error.ProgrammingError(
1867 raise error.ProgrammingError(
1860 b'largefiles is not compatible with in-memory merge'
1868 b'largefiles is not compatible with in-memory merge'
1861 )
1869 )
1862 result = orig(repo, node, branchmerge, force, *args, **kwargs)
1870 result = orig(repo, node, branchmerge, force, *args, **kwargs)
1863
1871
1864 newstandins = lfutil.getstandinsstate(repo)
1872 newstandins = lfutil.getstandinsstate(repo)
1865 filelist = lfutil.getlfilestoupdate(oldstandins, newstandins)
1873 filelist = lfutil.getlfilestoupdate(oldstandins, newstandins)
1866
1874
1867     # to avoid leaving all largefiles dirty (and thus rehashing them), mark
1875     # to avoid leaving all largefiles dirty (and thus rehashing them), mark
1868     # all the ones that didn't change as clean
1876     # all the ones that didn't change as clean
1869 for lfile in oldclean.difference(filelist):
1877 for lfile in oldclean.difference(filelist):
1870 lfdirstate.update_file(lfile, p1_tracked=True, wc_tracked=True)
1878 lfdirstate.update_file(lfile, p1_tracked=True, wc_tracked=True)
1871
1879
1872 if branchmerge or force or partial:
1880 if branchmerge or force or partial:
1873 filelist.extend(s.deleted + s.removed)
1881 filelist.extend(s.deleted + s.removed)
1874
1882
1875 lfcommands.updatelfiles(
1883 lfcommands.updatelfiles(
1876 repo.ui, repo, filelist=filelist, normallookup=partial
1884 repo.ui, repo, filelist=filelist, normallookup=partial
1877 )
1885 )
1878
1886
1879 return result
1887 return result
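# A standalone sketch of the standin refresh performed in mergeupdate
# above: the standin holds a hex digest of the largefile's content (sha1
# here, matching the extension's historical choice; hedged), so hashing
# the working copy and rewriting the standin keeps the two in sync. The
# helpers below are simplified stand-ins for lfutil.
import hashlib


def hashfile(payload):
    return hashlib.sha1(payload).hexdigest()


standins = {}


def writestandin(lfile, lfhash):
    standins['.hglf/' + lfile] = lfhash + '\n'


payload = b'big binary payload'
writestandin('data/big.bin', hashfile(payload))
assert standins['.hglf/data/big.bin'].strip() == hashfile(payload)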
1880
1888
1881
1889
1882 @eh.wrapfunction(scmutil, 'marktouched')
1890 @eh.wrapfunction(scmutil, 'marktouched')
1883 def scmutilmarktouched(orig, repo, files, *args, **kwargs):
1891 def scmutilmarktouched(orig, repo, files, *args, **kwargs):
1884 result = orig(repo, files, *args, **kwargs)
1892 result = orig(repo, files, *args, **kwargs)
1885
1893
1886 filelist = []
1894 filelist = []
1887 for f in files:
1895 for f in files:
1888 lf = lfutil.splitstandin(f)
1896 lf = lfutil.splitstandin(f)
1889 if lf is not None:
1897 if lf is not None:
1890 filelist.append(lf)
1898 filelist.append(lf)
1891 if filelist:
1899 if filelist:
1892 lfcommands.updatelfiles(
1900 lfcommands.updatelfiles(
1893 repo.ui,
1901 repo.ui,
1894 repo,
1902 repo,
1895 filelist=filelist,
1903 filelist=filelist,
1896 printmessage=False,
1904 printmessage=False,
1897 normallookup=True,
1905 normallookup=True,
1898 )
1906 )
1899
1907
1900 return result
1908 return result
1901
1909
1902
1910
1903 @eh.wrapfunction(upgrade_actions, 'preservedrequirements')
1911 @eh.wrapfunction(upgrade_actions, 'preservedrequirements')
1904 @eh.wrapfunction(upgrade_actions, 'supporteddestrequirements')
1912 @eh.wrapfunction(upgrade_actions, 'supporteddestrequirements')
1905 def upgraderequirements(orig, repo):
1913 def upgraderequirements(orig, repo):
1906 reqs = orig(repo)
1914 reqs = orig(repo)
1907 if b'largefiles' in repo.requirements:
1915 if b'largefiles' in repo.requirements:
1908 reqs.add(b'largefiles')
1916 reqs.add(b'largefiles')
1909 return reqs
1917 return reqs
1910
1918
1911
1919
1912 _lfscheme = b'largefile://'
1920 _lfscheme = b'largefile://'
1913
1921
1914
1922
1915 @eh.wrapfunction(urlmod, 'open')
1923 @eh.wrapfunction(urlmod, 'open')
1916 def openlargefile(orig, ui, url_, data=None, **kwargs):
1924 def openlargefile(orig, ui, url_, data=None, **kwargs):
1917 if url_.startswith(_lfscheme):
1925 if url_.startswith(_lfscheme):
1918 if data:
1926 if data:
1919 msg = b"cannot use data on a 'largefile://' url"
1927 msg = b"cannot use data on a 'largefile://' url"
1920 raise error.ProgrammingError(msg)
1928 raise error.ProgrammingError(msg)
1921 lfid = url_[len(_lfscheme) :]
1929 lfid = url_[len(_lfscheme) :]
1922 return storefactory.getlfile(ui, lfid)
1930 return storefactory.getlfile(ui, lfid)
1923 else:
1931 else:
1924 return orig(ui, url_, data=data, **kwargs)
1932 return orig(ui, url_, data=data, **kwargs)
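# A standalone sketch of the scheme dispatch in openlargefile above:
# strip a custom URL scheme prefix and route to a different opener,
# otherwise fall through to the original. Return values are stand-ins
# for storefactory.getlfile and the wrapped opener.
_LFSCHEME = 'largefile://'


def open_url(url, data=None):
    if url.startswith(_LFSCHEME):
        if data:
            raise ValueError("cannot use data on a 'largefile://' url")
        lfid = url[len(_LFSCHEME):]
        return ('lfile', lfid)  # stand-in for storefactory.getlfile
    return ('orig', url, data)  # stand-in for the wrapped opener


assert open_url('largefile://abc123') == ('lfile', 'abc123')
assert open_url('https://example.com') == ('orig', 'https://example.com', None)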
@@ -1,474 +1,476 b''
1 # Copyright 2009-2010 Gregory P. Ward
1 # Copyright 2009-2010 Gregory P. Ward
2 # Copyright 2009-2010 Intelerad Medical Systems Incorporated
2 # Copyright 2009-2010 Intelerad Medical Systems Incorporated
3 # Copyright 2010-2011 Fog Creek Software
3 # Copyright 2010-2011 Fog Creek Software
4 # Copyright 2010-2011 Unity Technologies
4 # Copyright 2010-2011 Unity Technologies
5 #
5 #
6 # This software may be used and distributed according to the terms of the
6 # This software may be used and distributed according to the terms of the
7 # GNU General Public License version 2 or any later version.
7 # GNU General Public License version 2 or any later version.
8
8
9 '''setup for largefiles repositories: reposetup'''
9 '''setup for largefiles repositories: reposetup'''
10
10
11 import copy
11 import copy
12
12
13 from mercurial.i18n import _
13 from mercurial.i18n import _
14
14
15 from mercurial import (
15 from mercurial import (
16 error,
16 error,
17 extensions,
17 extensions,
18 localrepo,
18 localrepo,
19 match as matchmod,
19 match as matchmod,
20 scmutil,
20 scmutil,
21 util,
21 util,
22 )
22 )
23
23
24 from mercurial.dirstateutils import timestamp
24 from mercurial.dirstateutils import timestamp
25
25
26 from . import (
26 from . import (
27 lfcommands,
27 lfcommands,
28 lfutil,
28 lfutil,
29 )
29 )
30
30
31
31
32 def reposetup(ui, repo):
32 def reposetup(ui, repo):
33 # wire repositories should be given new wireproto functions
33 # wire repositories should be given new wireproto functions
34 # by "proto.wirereposetup()" via "hg.wirepeersetupfuncs"
34 # by "proto.wirereposetup()" via "hg.wirepeersetupfuncs"
35 if not repo.local():
35 if not repo.local():
36 return
36 return
37
37
38 class lfilesrepo(repo.__class__):
38 class lfilesrepo(repo.__class__):
39 # the mark to examine whether "repo" object enables largefiles or not
39 # the mark to examine whether "repo" object enables largefiles or not
40 _largefilesenabled = True
40 _largefilesenabled = True
41
41
42 lfstatus = False
42 lfstatus = False
43
43
44 # When lfstatus is set, return a context that gives the names
44 # When lfstatus is set, return a context that gives the names
45 # of largefiles instead of their corresponding standins and
45 # of largefiles instead of their corresponding standins and
46 # identifies the largefiles as always binary, regardless of
46 # identifies the largefiles as always binary, regardless of
47 # their actual contents.
47 # their actual contents.
48 def __getitem__(self, changeid):
48 def __getitem__(self, changeid):
49 ctx = super(lfilesrepo, self).__getitem__(changeid)
49 ctx = super(lfilesrepo, self).__getitem__(changeid)
50 if self.lfstatus:
50 if self.lfstatus:
51
51
52 def files(orig):
52 def files(orig):
53 filenames = orig()
53 filenames = orig()
54 return [lfutil.splitstandin(f) or f for f in filenames]
54 return [lfutil.splitstandin(f) or f for f in filenames]
55
55
56 extensions.wrapfunction(ctx, 'files', files)
56 extensions.wrapfunction(ctx, 'files', files)
57
57
58 def manifest(orig):
58 def manifest(orig):
59 man1 = orig()
59 man1 = orig()
60
60
61 class lfilesmanifest(man1.__class__):
61 class lfilesmanifest(man1.__class__):
62 def __contains__(self, filename):
62 def __contains__(self, filename):
63 orig = super(lfilesmanifest, self).__contains__
63 orig = super(lfilesmanifest, self).__contains__
64 return orig(filename) or orig(
64 return orig(filename) or orig(
65 lfutil.standin(filename)
65 lfutil.standin(filename)
66 )
66 )
67
67
68 man1.__class__ = lfilesmanifest
68 man1.__class__ = lfilesmanifest
69 return man1
69 return man1
70
70
71 extensions.wrapfunction(ctx, 'manifest', manifest)
71 extensions.wrapfunction(ctx, 'manifest', manifest)
72
72
73 def filectx(orig, path, fileid=None, filelog=None):
73 def filectx(orig, path, fileid=None, filelog=None):
74 try:
74 try:
75 if filelog is not None:
75 if filelog is not None:
76 result = orig(path, fileid, filelog)
76 result = orig(path, fileid, filelog)
77 else:
77 else:
78 result = orig(path, fileid)
78 result = orig(path, fileid)
79 except error.LookupError:
79 except error.LookupError:
80 # Adding a null character will cause Mercurial to
80 # Adding a null character will cause Mercurial to
81 # identify this as a binary file.
81 # identify this as a binary file.
82 if filelog is not None:
82 if filelog is not None:
83 result = orig(lfutil.standin(path), fileid, filelog)
83 result = orig(lfutil.standin(path), fileid, filelog)
84 else:
84 else:
85 result = orig(lfutil.standin(path), fileid)
85 result = orig(lfutil.standin(path), fileid)
86 olddata = result.data
86 olddata = result.data
87 result.data = lambda: olddata() + b'\0'
87 result.data = lambda: olddata() + b'\0'
88 return result
88 return result
89
89
90 extensions.wrapfunction(ctx, 'filectx', filectx)
90 extensions.wrapfunction(ctx, 'filectx', filectx)
91
91
92 return ctx
92 return ctx
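# A standalone sketch of the binary-marker trick in filectx() above:
# Mercurial classifies data containing a NUL byte as binary, so appending
# b'\0' to the standin's data makes diff/annotate treat the largefile as
# binary. FakeFilectx and is_binary are simplified stand-ins.
def is_binary(data):
    return b'\0' in data  # simplified form of the heuristic


class FakeFilectx:
    def __init__(self, payload):
        self.data = lambda: payload


result = FakeFilectx(b'deadbeef standin hash\n')
olddata = result.data
result.data = lambda: olddata() + b'\0'  # same wrap as in filectx() above
assert is_binary(result.data())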
93
93
94 # Figure out the status of big files and insert them into the
94 # Figure out the status of big files and insert them into the
95 # appropriate list in the result. Also removes standin files
95 # appropriate list in the result. Also removes standin files
96 # from the listing. Revert to the original status if
96 # from the listing. Revert to the original status if
97 # self.lfstatus is False.
97 # self.lfstatus is False.
98 # XXX large file status is buggy when used on repo proxy.
98 # XXX large file status is buggy when used on repo proxy.
99 # XXX this needs to be investigated.
99 # XXX this needs to be investigated.
100 @localrepo.unfilteredmethod
100 @localrepo.unfilteredmethod
101 def status(
101 def status(
102 self,
102 self,
103 node1=b'.',
103 node1=b'.',
104 node2=None,
104 node2=None,
105 match=None,
105 match=None,
106 ignored=False,
106 ignored=False,
107 clean=False,
107 clean=False,
108 unknown=False,
108 unknown=False,
109 listsubrepos=False,
109 listsubrepos=False,
110 ):
110 ):
111 listignored, listclean, listunknown = ignored, clean, unknown
111 listignored, listclean, listunknown = ignored, clean, unknown
112 orig = super(lfilesrepo, self).status
112 orig = super(lfilesrepo, self).status
113 if not self.lfstatus:
113 if not self.lfstatus:
114 return orig(
114 return orig(
115 node1,
115 node1,
116 node2,
116 node2,
117 match,
117 match,
118 listignored,
118 listignored,
119 listclean,
119 listclean,
120 listunknown,
120 listunknown,
121 listsubrepos,
121 listsubrepos,
122 )
122 )
123
123
124 # some calls in this function rely on the old version of status
124 # some calls in this function rely on the old version of status
125 self.lfstatus = False
125 self.lfstatus = False
126 ctx1 = self[node1]
126 ctx1 = self[node1]
127 ctx2 = self[node2]
127 ctx2 = self[node2]
128 working = ctx2.rev() is None
128 working = ctx2.rev() is None
129 parentworking = working and ctx1 == self[b'.']
129 parentworking = working and ctx1 == self[b'.']
130
130
131 if match is None:
131 if match is None:
132 match = matchmod.always()
132 match = matchmod.always()
133
133
134 try:
134 try:
135 # updating the dirstate is optional
135 # updating the dirstate is optional
136 # so we don't wait on the lock
136 # so we don't wait on the lock
137 wlock = self.wlock(False)
137 wlock = self.wlock(False)
138 gotlock = True
138 gotlock = True
139 except error.LockError:
139 except error.LockError:
140 wlock = util.nullcontextmanager()
140 wlock = util.nullcontextmanager()
141 gotlock = False
141 gotlock = False
142 with wlock, self.dirstate.running_status(self):
142 with wlock, self.dirstate.running_status(self):
143
143
144 # First check if paths or patterns were specified on the
                # First check if paths or patterns were specified on the
                # command line. If there were, and they don't match any
                # largefiles, we should just bail here and let super
                # handle it -- thus gaining a big performance boost.
                lfdirstate = lfutil.openlfdirstate(ui, self)
                if not match.always():
                    for f in lfdirstate:
                        if match(f):
                            break
                    else:
                        return orig(
                            node1,
                            node2,
                            match,
                            listignored,
                            listclean,
                            listunknown,
                            listsubrepos,
                        )

                # Create a copy of match that matches standins instead
                # of largefiles.
                def tostandins(files):
                    if not working:
                        return files
                    newfiles = []
                    dirstate = self.dirstate
                    for f in files:
                        sf = lfutil.standin(f)
                        if sf in dirstate:
                            newfiles.append(sf)
                        elif dirstate.hasdir(sf):
                            # Directory entries could be regular or
                            # standin, check both
                            newfiles.extend((f, sf))
                        else:
                            newfiles.append(f)
                    return newfiles

                m = copy.copy(match)
                m._was_tampered_with = True
                m._files = tostandins(m._files)

                result = orig(
                    node1, node2, m, ignored, clean, unknown, listsubrepos
                )
                if working:

                    def sfindirstate(f):
                        sf = lfutil.standin(f)
                        dirstate = self.dirstate
                        return sf in dirstate or dirstate.hasdir(sf)

                    match._was_tampered_with = True
                    match._files = [f for f in match._files if sfindirstate(f)]
                    # Don't waste time getting the ignored and unknown
                    # files from lfdirstate
                    unsure, s, mtime_boundary = lfdirstate.status(
                        match,
                        subrepos=[],
                        ignored=False,
                        clean=listclean,
                        unknown=False,
                    )
                    (modified, added, removed, deleted, clean) = (
                        s.modified,
                        s.added,
                        s.removed,
                        s.deleted,
                        s.clean,
                    )
                    if parentworking:
                        wctx = repo[None]
                        for lfile in unsure:
                            standin = lfutil.standin(lfile)
                            if standin not in ctx1:
                                # from second parent
                                modified.append(lfile)
                            elif lfutil.readasstandin(
                                ctx1[standin]
                            ) != lfutil.hashfile(self.wjoin(lfile)):
                                modified.append(lfile)
                            else:
                                if listclean:
                                    clean.append(lfile)
                                s = wctx[lfile].lstat()
                                mode = s.st_mode
                                size = s.st_size
                                mtime = timestamp.reliable_mtime_of(
                                    s, mtime_boundary
                                )
                                if mtime is not None:
                                    cache_data = (mode, size, mtime)
                                    lfdirstate.set_clean(lfile, cache_data)
                    else:
                        tocheck = unsure + modified + added + clean
                        modified, added, clean = [], [], []
                        checkexec = self.dirstate._checkexec

                        for lfile in tocheck:
                            standin = lfutil.standin(lfile)
                            if standin in ctx1:
                                abslfile = self.wjoin(lfile)
                                if (
                                    lfutil.readasstandin(ctx1[standin])
                                    != lfutil.hashfile(abslfile)
                                ) or (
                                    checkexec
                                    and (b'x' in ctx1.flags(standin))
                                    != bool(lfutil.getexecutable(abslfile))
                                ):
                                    modified.append(lfile)
                                elif listclean:
                                    clean.append(lfile)
                            else:
                                added.append(lfile)

                    # at this point, 'removed' contains largefiles
                    # marked as 'R' in the working context.
                    # then, largefiles not managed also in the target
                    # context should be excluded from 'removed'.
                    removed = [
                        lfile
                        for lfile in removed
                        if lfutil.standin(lfile) in ctx1
                    ]

                    # Standins no longer found in lfdirstate have been deleted
                    for standin in ctx1.walk(lfutil.getstandinmatcher(self)):
                        lfile = lfutil.splitstandin(standin)
                        if not match(lfile):
                            continue
                        if lfile not in lfdirstate:
                            deleted.append(lfile)
                            # Sync "largefile has been removed" back to the
                            # standin. Removing a file as a side effect of
                            # running status is gross, but the alternatives (if
                            # any) are worse.
                            self.wvfs.unlinkpath(standin, ignoremissing=True)

                    # Filter result lists
                    result = list(result)

                    # Largefiles are not really removed when they're
                    # still in the normal dirstate. Likewise, normal
                    # files are not really removed if they are still in
                    # lfdirstate. This happens in merges where files
                    # change type.
                    removed = [f for f in removed if f not in self.dirstate]
                    result[2] = [f for f in result[2] if f not in lfdirstate]

                    lfiles = set(lfdirstate)
                    # Unknown files
                    result[4] = set(result[4]).difference(lfiles)
                    # Ignored files
                    result[5] = set(result[5]).difference(lfiles)
                    # combine normal files and largefiles
                    normals = [
                        [fn for fn in filelist if not lfutil.isstandin(fn)]
                        for filelist in result
                    ]
                    lfstatus = (
                        modified,
                        added,
                        removed,
                        deleted,
                        [],
                        [],
                        clean,
                    )
                    result = [
                        sorted(list1 + list2)
                        for (list1, list2) in zip(normals, lfstatus)
                    ]
                else:  # not against working directory
                    result = [
                        [lfutil.splitstandin(f) or f for f in items]
                        for items in result
                    ]

                if gotlock:
                    lfdirstate.write(self.currenttransaction())
                else:
                    lfdirstate.invalidate()

            self.lfstatus = True
            return scmutil.status(*result)

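        # For reference: a largefile such as 'big.bin' is tracked in history
        # through a small "standin" file at '.hglf/big.bin' whose content is
        # the hash of the real file; lfutil.standin() and lfutil.splitstandin()
        # convert between the two path forms, which is why the matcher above
        # is rewritten to look at standin names.
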
        def commitctx(self, ctx, *args, **kwargs):
            node = super(lfilesrepo, self).commitctx(ctx, *args, **kwargs)

            class lfilesctx(ctx.__class__):
                def markcommitted(self, node):
                    orig = super(lfilesctx, self).markcommitted
                    return lfutil.markcommitted(orig, self, node)

            ctx.__class__ = lfilesctx
            return node

        # Before commit, largefile standins have not had their
        # contents updated to reflect the hash of their largefile.
        # Do that here.
        def commit(
            self,
            text=b"",
            user=None,
            date=None,
            match=None,
            force=False,
            editor=False,
            extra=None,
        ):
            if extra is None:
                extra = {}
            orig = super(lfilesrepo, self).commit

            with self.wlock():
                lfcommithook = self._lfcommithooks[-1]
                match = lfcommithook(self, match)
                result = orig(
                    text=text,
                    user=user,
                    date=date,
                    match=match,
                    force=force,
                    editor=editor,
                    extra=extra,
                )
                return result

        # TODO: _subdirlfs should be moved into "lfutil.py", because
        # it is referred only from "lfutil.updatestandinsbymatch"
        def _subdirlfs(self, files, lfiles):
            """
            Adjust matched file list
            If we pass a directory to commit whose only committable files
            are largefiles, the core commit code aborts before finding
            the largefiles.
            So we do the following:
            For directories that only have largefiles as matches,
            we explicitly add the largefiles to the match list and remove
            the directory.
            In other cases, we leave the match list unmodified.
            """
            actualfiles = []
            dirs = []
            regulars = []

            for f in files:
                if lfutil.isstandin(f + b'/'):
                    raise error.Abort(
                        _(b'file "%s" is a largefile standin') % f,
                        hint=b'commit the largefile itself instead',
                    )
                # Scan directories
                if self.wvfs.isdir(f):
                    dirs.append(f)
                else:
                    regulars.append(f)

            for f in dirs:
                matcheddir = False
                d = self.dirstate.normalize(f) + b'/'
                # Check for matched normal files
                for mf in regulars:
                    if self.dirstate.normalize(mf).startswith(d):
                        actualfiles.append(f)
                        matcheddir = True
                        break
                if not matcheddir:
                    # If no normal match, manually append
                    # any matching largefiles
                    for lf in lfiles:
                        if self.dirstate.normalize(lf).startswith(d):
                            actualfiles.append(lf)
                            if not matcheddir:
                                # There may still be normal files in the dir, so
                                # add a directory to the list, which
                                # forces status/dirstate to walk all files and
                                # call the match function on the matcher, even
                                # on case sensitive filesystems.
                                actualfiles.append(b'.')
                                matcheddir = True
                # Nothing in dir, so readd it
                # and let commit reject it
                if not matcheddir:
                    actualfiles.append(f)

            # Always add normal files
            actualfiles += regulars
            return actualfiles

    repo.__class__ = lfilesrepo

    # stack of hooks being executed before committing.
    # only last element ("_lfcommithooks[-1]") is used for each committing.
    repo._lfcommithooks = [lfutil.updatestandinsbymatch]

    # Stack of status writer functions taking "*msg, **opts" arguments
    # like "ui.status()". Only last element ("_lfstatuswriters[-1]")
    # is used to write status out.
    repo._lfstatuswriters = [ui.status]

    def prepushoutgoinghook(pushop):
        """Push largefiles for pushop before pushing revisions."""
        lfrevs = pushop.lfrevs
        if lfrevs is None:
            lfrevs = pushop.outgoing.missing
        if lfrevs:
            toupload = set()
            addfunc = lambda fn, lfhash: toupload.add(lfhash)
            lfutil.getlfilestoupload(pushop.repo, lfrevs, addfunc)
            lfcommands.uploadlfiles(ui, pushop.repo, pushop.remote, toupload)

    repo.prepushoutgoinghooks.add(b"largefiles", prepushoutgoinghook)

    def checkrequireslfiles(ui, repo, **kwargs):
        with repo.lock():
            if b'largefiles' in repo.requirements:
                return
            marker = lfutil.shortnameslash
            for entry in repo.store.data_entries():
                # XXX note that this match is not rooted and can wrongly match
                # directory ending with ".hglf"
                if entry.is_revlog and marker in entry.target_id:
                    repo.requirements.add(b'largefiles')
                    scmutil.writereporequirements(repo)
                    break

    ui.setconfig(
        b'hooks', b'changegroup.lfiles', checkrequireslfiles, b'largefiles'
    )
    ui.setconfig(b'hooks', b'commit.lfiles', checkrequireslfiles, b'largefiles')
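
    # Note: registering checkrequireslfiles via ui.setconfig, as above, wires
    # the callable directly into this process's configuration; it plays the
    # same role as a [hooks] changegroup/commit entry in a config file, but
    # never touches the user's configuration on disk.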
@@ -1,2281 +1,2281 b''
# rebase.py - rebasing feature for mercurial
#
# Copyright 2008 Stefano Tortarolo <stefano.tortarolo at gmail dot com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

'''command to move sets of revisions to a different ancestor

This extension lets you rebase changesets in an existing Mercurial
repository.

For more information:
https://mercurial-scm.org/wiki/RebaseExtension
'''
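
# Typical invocations, for orientation (see `hg help rebase` for the full
# option list):
#
#   hg rebase -s SRC -d DEST    # move SRC and its descendants onto DEST
#   hg rebase -r REV -d DEST    # move just the given revision(s)
#   hg rebase --continue        # resume after resolving merge conflicts
#   hg rebase --abort           # back out of an interrupted rebase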

import os

from mercurial.i18n import _
from mercurial.node import (
    nullrev,
    short,
    wdirrev,
)
from mercurial.pycompat import open
from mercurial import (
    bookmarks,
    cmdutil,
    commands,
    copies,
    destutil,
    error,
    extensions,
    logcmdutil,
    merge as mergemod,
    mergestate as mergestatemod,
    mergeutil,
    obsolete,
    obsutil,
    patch,
    phases,
    pycompat,
    registrar,
    repair,
    revset,
    revsetlang,
    rewriteutil,
    scmutil,
    smartset,
    state as statemod,
    util,
)


# The following constants are used throughout the rebase module. The ordering of
# their values must be maintained.

# Indicates that a revision needs to be rebased
revtodo = -1
revtodostr = b'-1'

# legacy revstates no longer needed in current code
# -2: nullmerge, -3: revignored, -4: revprecursor, -5: revpruned
legacystates = {b'-2', b'-3', b'-4', b'-5'}

cmdtable = {}
command = registrar.command(cmdtable)

configtable = {}
configitem = registrar.configitem(configtable)
configitem(
    b'devel',
    b'rebase.force-in-memory-merge',
    default=False,
)
# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = b'ships-with-hg-core'


def _nothingtorebase():
    return 1


def _savebranch(ctx, extra):
    extra[b'branch'] = ctx.branch()


def _destrebase(repo, sourceset, destspace=None):
    """small wrapper around destmerge to pass the right extra args

    Please wrap destutil.destmerge instead."""
    return destutil.destmerge(
        repo,
        action=b'rebase',
        sourceset=sourceset,
        onheadcheck=False,
        destspace=destspace,
    )


revsetpredicate = registrar.revsetpredicate()


@revsetpredicate(b'_destrebase')
def _revsetdestrebase(repo, subset, x):
    # ``_rebasedefaultdest()``

    # default destination for rebase.
    # # XXX: Currently private because I expect the signature to change.
    # # XXX: - bailing out in case of ambiguity vs returning all data.
    # i18n: "_rebasedefaultdest" is a keyword
    sourceset = None
    if x is not None:
        sourceset = revset.getset(repo, smartset.fullreposet(repo), x)
    return subset & smartset.baseset([_destrebase(repo, sourceset)])


@revsetpredicate(b'_destautoorphanrebase')
def _revsetdestautoorphanrebase(repo, subset, x):
    # ``_destautoorphanrebase()``

    # automatic rebase destination for a single orphan revision.
    unfi = repo.unfiltered()
    obsoleted = unfi.revs(b'obsolete()')

    src = revset.getset(repo, subset, x).first()

    # Empty src or already obsoleted - Do not return a destination
    if not src or src in obsoleted:
        return smartset.baseset()
    dests = destutil.orphanpossibledestination(repo, src)
    if len(dests) > 1:
        raise error.StateError(
            _(b"ambiguous automatic rebase: %r could end up on any of %r")
            % (src, dests)
        )
    # We have zero or one destination, so we can just return here.
    return smartset.baseset(dests)


def _ctxdesc(ctx):
    """short description for a context"""
    return cmdutil.format_changeset_summary(
        ctx.repo().ui, ctx, command=b'rebase'
    )


class rebaseruntime:
    """This class is a container for rebase runtime state"""

    def __init__(self, repo, ui, inmemory=False, dryrun=False, opts=None):
        if opts is None:
            opts = {}

        # prepared: whether we have rebasestate prepared or not. Currently it
        # decides whether "self.repo" is unfiltered or not.
        # The rebasestate has explicit hash to hash instructions not depending
        # on visibility. If rebasestate exists (in-memory or on-disk), use
        # unfiltered repo to avoid visibility issues.
        # Before knowing rebasestate (i.e. when starting a new rebase (not
        # --continue or --abort)), the original repo should be used so
        # visibility-dependent revsets are correct.
        self.prepared = False
        self.resume = False
        self._repo = repo

        self.ui = ui
        self.opts = opts
        self.originalwd = None
        self.external = nullrev
        # Mapping between the old revision id and either what is the new rebased
        # revision or what needs to be done with the old revision. The state
        # dict will be what contains most of the rebase progress state.
        self.state = {}
        self.activebookmark = None
        self.destmap = {}
        self.skipped = set()

        self.collapsef = opts.get('collapse', False)
        self.collapsemsg = cmdutil.logmessage(ui, pycompat.byteskwargs(opts))
        self.date = opts.get('date', None)

        e = opts.get('extrafn')  # internal, used by e.g. hgsubversion
        self.extrafns = [rewriteutil.preserve_extras_on_rebase]
        if e:
            self.extrafns = [e]

        self.backupf = ui.configbool(b'rewrite', b'backup-bundle')
        self.keepf = opts.get('keep', False)
        self.keepbranchesf = opts.get('keepbranches', False)
        self.skipemptysuccessorf = rewriteutil.skip_empty_successor(
            repo.ui, b'rebase'
        )
        self.obsolete_with_successor_in_destination = {}
        self.obsolete_with_successor_in_rebase_set = set()
        self.inmemory = inmemory
        self.dryrun = dryrun
        self.stateobj = statemod.cmdstate(repo, b'rebasestate')

    @property
    def repo(self):
        if self.prepared:
            return self._repo.unfiltered()
        else:
            return self._repo

    def storestatus(self, tr=None):
        """Store the current status to allow recovery"""
        if tr:
            tr.addfilegenerator(
                b'rebasestate',
                (b'rebasestate',),
                self._writestatus,
                location=b'plain',
            )
        else:
            with self.repo.vfs(b"rebasestate", b"w") as f:
                self._writestatus(f)

    def _writestatus(self, f):
        repo = self.repo
        assert repo.filtername is None
        f.write(repo[self.originalwd].hex() + b'\n')
        # was "dest". we now write dest per src root below.
        f.write(b'\n')
        f.write(repo[self.external].hex() + b'\n')
        f.write(b'%d\n' % int(self.collapsef))
        f.write(b'%d\n' % int(self.keepf))
        f.write(b'%d\n' % int(self.keepbranchesf))
        f.write(b'%s\n' % (self.activebookmark or b''))
        destmap = self.destmap
        for d, v in self.state.items():
            oldrev = repo[d].hex()
            if v >= 0:
                newrev = repo[v].hex()
            else:
                newrev = b"%d" % v
            destnode = repo[destmap[d]].hex()
            f.write(b"%s:%s:%s\n" % (oldrev, newrev, destnode))
        repo.ui.debug(b'rebase status stored\n')

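    # On-disk layout of .hg/rebasestate, as written above (one field per
    # line): originalwd hex node; an empty line (legacy single-destination
    # slot); external hex node; three 0/1 flags for collapse, keep and
    # keepbranches; the active bookmark name (may be empty); then one
    # "oldnode:newnode:destnode" triple per rebased revision. A minimal
    # standalone sketch of a parser for this current layout (illustrative
    # only, not part of this module; legacy files may differ slightly):
    #
    #     def parse_rebasestate(data):
    #         lines = data.splitlines()
    #         header = {
    #             b'originalwd': lines[0],
    #             b'external': lines[2],  # lines[1] is the legacy dest slot
    #             b'collapse': bool(int(lines[3])),
    #             b'keep': bool(int(lines[4])),
    #             b'keepbranches': bool(int(lines[5])),
    #             b'activebookmark': lines[6],
    #         }
    #         entries = [l.split(b':') for l in lines[7:]]
    #         return header, entries
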
    def restorestatus(self):
        """Restore a previously stored status"""
        if not self.stateobj.exists():
            cmdutil.wrongtooltocontinue(self.repo, _(b'rebase'))

        data = self._read()
        self.repo.ui.debug(b'rebase status resumed\n')

        self.originalwd = data[b'originalwd']
        self.destmap = data[b'destmap']
        self.state = data[b'state']
        self.skipped = data[b'skipped']
        self.collapsef = data[b'collapse']
        self.keepf = data[b'keep']
        self.keepbranchesf = data[b'keepbranches']
        self.external = data[b'external']
        self.activebookmark = data[b'activebookmark']

    def _read(self):
        self.prepared = True
        repo = self.repo
        assert repo.filtername is None
        data = {
            b'keepbranches': None,
            b'collapse': None,
            b'activebookmark': None,
            b'external': nullrev,
            b'keep': None,
            b'originalwd': None,
        }
        legacydest = None
        state = {}
        destmap = {}

        if True:
            f = repo.vfs(b"rebasestate")
            for i, l in enumerate(f.read().splitlines()):
                if i == 0:
                    data[b'originalwd'] = repo[l].rev()
                elif i == 1:
                    # this line should be empty in newer version. but legacy
                    # clients may still use it
                    if l:
                        legacydest = repo[l].rev()
                elif i == 2:
                    data[b'external'] = repo[l].rev()
                elif i == 3:
                    data[b'collapse'] = bool(int(l))
                elif i == 4:
                    data[b'keep'] = bool(int(l))
                elif i == 5:
                    data[b'keepbranches'] = bool(int(l))
                elif i == 6 and not (len(l) == 81 and b':' in l):
                    # line 6 is a recent addition, so for backwards
                    # compatibility check that the line doesn't look like the
                    # oldrev:newrev lines
                    data[b'activebookmark'] = l
                else:
                    args = l.split(b':')
                    oldrev = repo[args[0]].rev()
                    newrev = args[1]
                    if newrev in legacystates:
                        continue
                    if len(args) > 2:
                        destrev = repo[args[2]].rev()
                    else:
                        destrev = legacydest
                    destmap[oldrev] = destrev
                    if newrev == revtodostr:
                        state[oldrev] = revtodo
                    # Legacy compat special case
                    else:
                        state[oldrev] = repo[newrev].rev()

        if data[b'keepbranches'] is None:
            raise error.Abort(_(b'.hg/rebasestate is incomplete'))

        data[b'destmap'] = destmap
        data[b'state'] = state
        skipped = set()
        # recompute the set of skipped revs
        if not data[b'collapse']:
            seen = set(destmap.values())
            for old, new in sorted(state.items()):
                if new != revtodo and new in seen:
                    skipped.add(old)
                seen.add(new)
        data[b'skipped'] = skipped
        repo.ui.debug(
            b'computed skipped revs: %s\n'
            % (b' '.join(b'%d' % r for r in sorted(skipped)) or b'')
        )

        return data

    def _handleskippingobsolete(self):
        """Compute structures necessary for skipping obsolete revisions"""
        if self.keepf:
            return
        if not self.ui.configbool(b'experimental', b'rebaseskipobsolete'):
            return
        obsoleteset = {r for r in self.state if self.repo[r].obsolete()}
        (
            self.obsolete_with_successor_in_destination,
            self.obsolete_with_successor_in_rebase_set,
        ) = _compute_obsolete_sets(self.repo, obsoleteset, self.destmap)
        skippedset = set(self.obsolete_with_successor_in_destination)
        skippedset.update(self.obsolete_with_successor_in_rebase_set)
        _checkobsrebase(self.repo, self.ui, obsoleteset, skippedset)
        if obsolete.isenabled(self.repo, obsolete.allowdivergenceopt):
            self.obsolete_with_successor_in_rebase_set = set()
        else:
            for rev in self.repo.revs(
                b'descendants(%ld) and not %ld',
                self.obsolete_with_successor_in_rebase_set,
                self.obsolete_with_successor_in_rebase_set,
            ):
                self.state.pop(rev, None)
                self.destmap.pop(rev, None)

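    # For orientation: obsolete_with_successor_in_destination maps each
    # obsolete revision in the rebase set to its successor when that successor
    # is already in the destination (None when the revision was pruned without
    # a successor), while obsolete_with_successor_in_rebase_set holds
    # revisions whose successors are themselves being rebased; rebasing those
    # would create divergence, so they are dropped from the state unless
    # experimental.evolution.allowdivergence is enabled.
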
366 def _prepareabortorcontinue(
366 def _prepareabortorcontinue(
367 self, isabort, backup=True, suppwarns=False, dryrun=False, confirm=False
367 self, isabort, backup=True, suppwarns=False, dryrun=False, confirm=False
368 ):
368 ):
369 self.resume = True
369 self.resume = True
370 try:
370 try:
371 self.restorestatus()
371 self.restorestatus()
372 # Calculate self.obsolete_* sets
372 # Calculate self.obsolete_* sets
373 self._handleskippingobsolete()
373 self._handleskippingobsolete()
374 self.collapsemsg = restorecollapsemsg(self.repo, isabort)
374 self.collapsemsg = restorecollapsemsg(self.repo, isabort)
375 except error.RepoLookupError:
375 except error.RepoLookupError:
376 if isabort:
376 if isabort:
377 clearstatus(self.repo)
377 clearstatus(self.repo)
378 clearcollapsemsg(self.repo)
378 clearcollapsemsg(self.repo)
379 self.repo.ui.warn(
379 self.repo.ui.warn(
380 _(
380 _(
381 b'rebase aborted (no revision is removed,'
381 b'rebase aborted (no revision is removed,'
382 b' only broken state is cleared)\n'
382 b' only broken state is cleared)\n'
383 )
383 )
384 )
384 )
385 return 0
385 return 0
386 else:
386 else:
387 msg = _(b'cannot continue inconsistent rebase')
387 msg = _(b'cannot continue inconsistent rebase')
388 hint = _(b'use "hg rebase --abort" to clear broken state')
388 hint = _(b'use "hg rebase --abort" to clear broken state')
389 raise error.Abort(msg, hint=hint)
389 raise error.Abort(msg, hint=hint)
390
390
391 if isabort:
391 if isabort:
392 backup = backup and self.backupf
392 backup = backup and self.backupf
393 return self._abort(
393 return self._abort(
394 backup=backup,
394 backup=backup,
395 suppwarns=suppwarns,
395 suppwarns=suppwarns,
396 dryrun=dryrun,
396 dryrun=dryrun,
397 confirm=confirm,
397 confirm=confirm,
398 )
398 )
399
399
400 def _preparenewrebase(self, destmap):
400 def _preparenewrebase(self, destmap):
401 if not destmap:
401 if not destmap:
402 return _nothingtorebase()
402 return _nothingtorebase()
403
403
404 result = buildstate(self.repo, destmap, self.collapsef)
404 result = buildstate(self.repo, destmap, self.collapsef)
405
405
406 if not result:
406 if not result:
407 # Empty state built, nothing to rebase
407 # Empty state built, nothing to rebase
408 self.ui.status(_(b'nothing to rebase\n'))
408 self.ui.status(_(b'nothing to rebase\n'))
409 return _nothingtorebase()
409 return _nothingtorebase()
410
410
411 (self.originalwd, self.destmap, self.state) = result
411 (self.originalwd, self.destmap, self.state) = result
412 if self.collapsef:
412 if self.collapsef:
413 dests = set(self.destmap.values())
413 dests = set(self.destmap.values())
414 if len(dests) != 1:
414 if len(dests) != 1:
415 raise error.InputError(
415 raise error.InputError(
416 _(b'--collapse does not work with multiple destinations')
416 _(b'--collapse does not work with multiple destinations')
417 )
417 )
418 destrev = next(iter(dests))
418 destrev = next(iter(dests))
419 destancestors = self.repo.changelog.ancestors(
419 destancestors = self.repo.changelog.ancestors(
420 [destrev], inclusive=True
420 [destrev], inclusive=True
421 )
421 )
422 self.external = externalparent(self.repo, self.state, destancestors)
422 self.external = externalparent(self.repo, self.state, destancestors)
423
423
424 for destrev in sorted(set(destmap.values())):
424 for destrev in sorted(set(destmap.values())):
425 dest = self.repo[destrev]
425 dest = self.repo[destrev]
426 if dest.closesbranch() and not self.keepbranchesf:
426 if dest.closesbranch() and not self.keepbranchesf:
427 self.ui.status(_(b'reopening closed branch head %s\n') % dest)
427 self.ui.status(_(b'reopening closed branch head %s\n') % dest)
428
428
429 # Calculate self.obsolete_* sets
429 # Calculate self.obsolete_* sets
430 self._handleskippingobsolete()
430 self._handleskippingobsolete()
431
431
432 if not self.keepf:
432 if not self.keepf:
433 rebaseset = set(destmap.keys())
433 rebaseset = set(destmap.keys())
434 rebaseset -= set(self.obsolete_with_successor_in_destination)
434 rebaseset -= set(self.obsolete_with_successor_in_destination)
435 rebaseset -= self.obsolete_with_successor_in_rebase_set
435 rebaseset -= self.obsolete_with_successor_in_rebase_set
436 # We have our own divergence-checking in the rebase extension
436 # We have our own divergence-checking in the rebase extension
437 overrides = {}
437 overrides = {}
438 if obsolete.isenabled(self.repo, obsolete.createmarkersopt):
438 if obsolete.isenabled(self.repo, obsolete.createmarkersopt):
439 overrides = {
439 overrides = {
440 (b'experimental', b'evolution.allowdivergence'): b'true'
440 (b'experimental', b'evolution.allowdivergence'): b'true'
441 }
441 }
442 try:
442 try:
443 with self.ui.configoverride(overrides):
443 with self.ui.configoverride(overrides):
444 rewriteutil.precheck(self.repo, rebaseset, action=b'rebase')
444 rewriteutil.precheck(self.repo, rebaseset, action=b'rebase')
445 except error.Abort as e:
445 except error.Abort as e:
446 if e.hint is None:
446 if e.hint is None:
447 e.hint = _(b'use --keep to keep original changesets')
447 e.hint = _(b'use --keep to keep original changesets')
448 raise e
448 raise e
449
449
450 self.prepared = True
450 self.prepared = True
451
451
452 def _assignworkingcopy(self):
452 def _assignworkingcopy(self):
453 if self.inmemory:
453 if self.inmemory:
454 from mercurial.context import overlayworkingctx
454 from mercurial.context import overlayworkingctx
455
455
456 self.wctx = overlayworkingctx(self.repo)
456 self.wctx = overlayworkingctx(self.repo)
457 self.repo.ui.debug(b"rebasing in memory\n")
457 self.repo.ui.debug(b"rebasing in memory\n")
458 else:
458 else:
459 self.wctx = self.repo[None]
459 self.wctx = self.repo[None]
460 self.repo.ui.debug(b"rebasing on disk\n")
460 self.repo.ui.debug(b"rebasing on disk\n")
461 self.repo.ui.log(
461 self.repo.ui.log(
462 b"rebase",
462 b"rebase",
463 b"using in-memory rebase: %r\n",
463 b"using in-memory rebase: %r\n",
464 self.inmemory,
464 self.inmemory,
465 rebase_imm_used=self.inmemory,
465 rebase_imm_used=self.inmemory,
466 )
466 )
467
467
468 def _performrebase(self, tr):
468 def _performrebase(self, tr):
469 self._assignworkingcopy()
469 self._assignworkingcopy()
470 repo, ui = self.repo, self.ui
470 repo, ui = self.repo, self.ui
471 if self.keepbranchesf:
471 if self.keepbranchesf:
472 # insert _savebranch at the start of extrafns so if
472 # insert _savebranch at the start of extrafns so if
473 # there's a user-provided extrafn it can clobber branch if
473 # there's a user-provided extrafn it can clobber branch if
474 # desired
474 # desired
475 self.extrafns.insert(0, _savebranch)
475 self.extrafns.insert(0, _savebranch)
476 if self.collapsef:
476 if self.collapsef:
477 branches = set()
477 branches = set()
478 for rev in self.state:
478 for rev in self.state:
479 branches.add(repo[rev].branch())
479 branches.add(repo[rev].branch())
480 if len(branches) > 1:
480 if len(branches) > 1:
481 raise error.InputError(
481 raise error.InputError(
482 _(b'cannot collapse multiple named branches')
482 _(b'cannot collapse multiple named branches')
483 )
483 )
484
484
485 # Keep track of the active bookmarks in order to reset them later
485 # Keep track of the active bookmarks in order to reset them later
486 self.activebookmark = self.activebookmark or repo._activebookmark
486 self.activebookmark = self.activebookmark or repo._activebookmark
487 if self.activebookmark:
487 if self.activebookmark:
488 bookmarks.deactivate(repo)
488 bookmarks.deactivate(repo)
489
489
490 # Store the state before we begin so users can run 'hg rebase --abort'
490 # Store the state before we begin so users can run 'hg rebase --abort'
491 # if we fail before the transaction closes.
491 # if we fail before the transaction closes.
492 self.storestatus()
492 self.storestatus()
493 if tr:
493 if tr:
494 # When using single transaction, store state when transaction
494 # When using single transaction, store state when transaction
495 # commits.
495 # commits.
496 self.storestatus(tr)
496 self.storestatus(tr)
497
497
498 cands = [k for k, v in self.state.items() if v == revtodo]
498 cands = [k for k, v in self.state.items() if v == revtodo]
499 p = repo.ui.makeprogress(
499 p = repo.ui.makeprogress(
500 _(b"rebasing"), unit=_(b'changesets'), total=len(cands)
500 _(b"rebasing"), unit=_(b'changesets'), total=len(cands)
501 )
501 )
502
502
503 def progress(ctx):
503 def progress(ctx):
504 p.increment(item=(b"%d:%s" % (ctx.rev(), ctx)))
504 p.increment(item=(b"%d:%s" % (ctx.rev(), ctx)))
505
505
506 for subset in sortsource(self.destmap):
506 for subset in sortsource(self.destmap):
507 sortedrevs = self.repo.revs(b'sort(%ld, -topo)', subset)
507 sortedrevs = self.repo.revs(b'sort(%ld, -topo)', subset)
508 for rev in sortedrevs:
508 for rev in sortedrevs:
509 self._rebasenode(tr, rev, progress)
509 self._rebasenode(tr, rev, progress)
510 p.complete()
510 p.complete()
511 ui.note(_(b'rebase merging completed\n'))
511 ui.note(_(b'rebase merging completed\n'))
512
512
513 def _concludenode(self, rev, editor, commitmsg=None):
513 def _concludenode(self, rev, editor, commitmsg=None):
514 """Commit the wd changes with parents p1 and p2.
514 """Commit the wd changes with parents p1 and p2.
515
515
516 Reuse commit info from rev but also store useful information in extra.
516 Reuse commit info from rev but also store useful information in extra.
517 Return node of committed revision."""
517 Return node of committed revision."""
518 repo = self.repo
518 repo = self.repo
519 ctx = repo[rev]
519 ctx = repo[rev]
520 if commitmsg is None:
520 if commitmsg is None:
521 commitmsg = ctx.description()
521 commitmsg = ctx.description()
522
522
523 # Skip replacement if collapsing, as that degenerates to p1 for all
523 # Skip replacement if collapsing, as that degenerates to p1 for all
524 # nodes.
524 # nodes.
525 if not self.collapsef:
525 if not self.collapsef:
526 cl = repo.changelog
526 cl = repo.changelog
527 commitmsg = rewriteutil.update_hash_refs(
527 commitmsg = rewriteutil.update_hash_refs(
528 repo,
528 repo,
529 commitmsg,
529 commitmsg,
530 {
530 {
531 cl.node(oldrev): [cl.node(newrev)]
531 cl.node(oldrev): [cl.node(newrev)]
532 for oldrev, newrev in self.state.items()
532 for oldrev, newrev in self.state.items()
533 if newrev != revtodo
533 if newrev != revtodo
534 },
534 },
535 )
535 )
536
536
537 date = self.date
537 date = self.date
538 if date is None:
538 if date is None:
539 date = ctx.date()
539 date = ctx.date()
540 extra = {}
540 extra = {}
541 if repo.ui.configbool(b'rebase', b'store-source'):
541 if repo.ui.configbool(b'rebase', b'store-source'):
542 extra = {b'rebase_source': ctx.hex()}
542 extra = {b'rebase_source': ctx.hex()}
543 for c in self.extrafns:
543 for c in self.extrafns:
544 c(ctx, extra)
544 c(ctx, extra)
545 destphase = max(ctx.phase(), phases.draft)
545 destphase = max(ctx.phase(), phases.draft)
546 overrides = {
546 overrides = {
547 (b'phases', b'new-commit'): destphase,
547 (b'phases', b'new-commit'): destphase,
548 (b'ui', b'allowemptycommit'): not self.skipemptysuccessorf,
548 (b'ui', b'allowemptycommit'): not self.skipemptysuccessorf,
549 }
549 }
550 with repo.ui.configoverride(overrides, b'rebase'):
550 with repo.ui.configoverride(overrides, b'rebase'):
551 if self.inmemory:
551 if self.inmemory:
552 newnode = commitmemorynode(
552 newnode = commitmemorynode(
553 repo,
553 repo,
554 wctx=self.wctx,
554 wctx=self.wctx,
555 extra=extra,
555 extra=extra,
556 commitmsg=commitmsg,
556 commitmsg=commitmsg,
557 editor=editor,
557 editor=editor,
558 user=ctx.user(),
558 user=ctx.user(),
559 date=date,
559 date=date,
560 )
560 )
561 else:
561 else:
562 newnode = commitnode(
562 newnode = commitnode(
563 repo,
563 repo,
564 extra=extra,
564 extra=extra,
565 commitmsg=commitmsg,
565 commitmsg=commitmsg,
566 editor=editor,
566 editor=editor,
567 user=ctx.user(),
567 user=ctx.user(),
568 date=date,
568 date=date,
569 )
569 )
570
570
571 return newnode
571 return newnode
572
572
573 def _rebasenode(self, tr, rev, progressfn):
573 def _rebasenode(self, tr, rev, progressfn):
574 repo, ui, opts = self.repo, self.ui, self.opts
574 repo, ui, opts = self.repo, self.ui, self.opts
575 ctx = repo[rev]
575 ctx = repo[rev]
576 desc = _ctxdesc(ctx)
576 desc = _ctxdesc(ctx)
577 if self.state[rev] == rev:
577 if self.state[rev] == rev:
578 ui.status(_(b'already rebased %s\n') % desc)
578 ui.status(_(b'already rebased %s\n') % desc)
579 elif rev in self.obsolete_with_successor_in_rebase_set:
579 elif rev in self.obsolete_with_successor_in_rebase_set:
580 msg = (
580 msg = (
581 _(
581 _(
582 b'note: not rebasing %s and its descendants as '
582 b'note: not rebasing %s and its descendants as '
583 b'this would cause divergence\n'
583 b'this would cause divergence\n'
584 )
584 )
585 % desc
585 % desc
586 )
586 )
587 repo.ui.status(msg)
587 repo.ui.status(msg)
588 self.skipped.add(rev)
588 self.skipped.add(rev)
589 elif rev in self.obsolete_with_successor_in_destination:
589 elif rev in self.obsolete_with_successor_in_destination:
590 succ = self.obsolete_with_successor_in_destination[rev]
590 succ = self.obsolete_with_successor_in_destination[rev]
591 if succ is None:
591 if succ is None:
592 msg = _(b'note: not rebasing %s, it has no successor\n') % desc
592 msg = _(b'note: not rebasing %s, it has no successor\n') % desc
593 else:
593 else:
594 succdesc = _ctxdesc(repo[succ])
594 succdesc = _ctxdesc(repo[succ])
595 msg = _(
595 msg = _(
596 b'note: not rebasing %s, already in destination as %s\n'
596 b'note: not rebasing %s, already in destination as %s\n'
597 ) % (desc, succdesc)
597 ) % (desc, succdesc)
598 repo.ui.status(msg)
598 repo.ui.status(msg)
599 # Make clearrebased aware state[rev] is not a true successor
599 # Make clearrebased aware state[rev] is not a true successor
600 self.skipped.add(rev)
600 self.skipped.add(rev)
601 # Record rev as moved to its desired destination in self.state.
601 # Record rev as moved to its desired destination in self.state.
602 # This helps bookmark and working parent movement.
602 # This helps bookmark and working parent movement.
603 dest = max(
603 dest = max(
604 adjustdest(repo, rev, self.destmap, self.state, self.skipped)
604 adjustdest(repo, rev, self.destmap, self.state, self.skipped)
605 )
605 )
606 self.state[rev] = dest
606 self.state[rev] = dest
607 elif self.state[rev] == revtodo:
607 elif self.state[rev] == revtodo:
608 ui.status(_(b'rebasing %s\n') % desc)
608 ui.status(_(b'rebasing %s\n') % desc)
609 progressfn(ctx)
609 progressfn(ctx)
610 p1, p2, base = defineparents(
610 p1, p2, base = defineparents(
611 repo,
611 repo,
612 rev,
612 rev,
613 self.destmap,
613 self.destmap,
614 self.state,
614 self.state,
615 self.skipped,
615 self.skipped,
616 self.obsolete_with_successor_in_destination,
616 self.obsolete_with_successor_in_destination,
617 )
617 )
618 if self.resume and self.wctx.p1().rev() == p1:
618 if self.resume and self.wctx.p1().rev() == p1:
619 repo.ui.debug(b'resuming interrupted rebase\n')
619 repo.ui.debug(b'resuming interrupted rebase\n')
620 self.resume = False
620 self.resume = False
621 else:
621 else:
622 overrides = {(b'ui', b'forcemerge'): opts.get('tool', b'')}
622 overrides = {(b'ui', b'forcemerge'): opts.get('tool', b'')}
623 with ui.configoverride(overrides, b'rebase'):
623 with ui.configoverride(overrides, b'rebase'):
624 try:
624 try:
625 rebasenode(
625 rebasenode(
626 repo,
626 repo,
627 rev,
627 rev,
628 p1,
628 p1,
629 p2,
629 p2,
630 base,
630 base,
631 self.collapsef,
631 self.collapsef,
632 wctx=self.wctx,
632 wctx=self.wctx,
633 )
633 )
634 except error.InMemoryMergeConflictsError:
634 except error.InMemoryMergeConflictsError:
635 if self.dryrun:
635 if self.dryrun:
636 raise error.ConflictResolutionRequired(b'rebase')
636 raise error.ConflictResolutionRequired(b'rebase')
637 if self.collapsef:
637 if self.collapsef:
638 # TODO: Make the overlayworkingctx reflected
638 # TODO: Make the overlayworkingctx reflected
639 # in the working copy here instead of re-raising
639 # in the working copy here instead of re-raising
640 # so the entire rebase operation is retried.
640 # so the entire rebase operation is retried.
641 raise
641 raise
642 ui.status(
642 ui.status(
643 _(
643 _(
644 b"hit merge conflicts; rebasing that "
644 b"hit merge conflicts; rebasing that "
645 b"commit again in the working copy\n"
645 b"commit again in the working copy\n"
646 )
646 )
647 )
647 )
648 try:
648 try:
649 cmdutil.bailifchanged(repo)
649 cmdutil.bailifchanged(repo)
650 except error.Abort:
650 except error.Abort:
651 clearstatus(repo)
                            clearstatus(repo)
                            clearcollapsemsg(repo)
                            raise
                        self.inmemory = False
                        self._assignworkingcopy()
                        mergemod.update(repo[p1], wc=self.wctx)
                        rebasenode(
                            repo,
                            rev,
                            p1,
                            p2,
                            base,
                            self.collapsef,
                            wctx=self.wctx,
                        )
            if not self.collapsef:
                merging = p2 != nullrev
                editform = cmdutil.mergeeditform(merging, b'rebase')
                editor = cmdutil.getcommiteditor(editform=editform, **opts)
                # We need to set parents again here just in case we're continuing
                # a rebase started with an old hg version (before 9c9cfecd4600),
                # because those old versions would have left us with two dirstate
                # parents, and we don't want to create a merge commit here (unless
                # we're rebasing a merge commit).
                self.wctx.setparents(repo[p1].node(), repo[p2].node())
                newnode = self._concludenode(rev, editor)
            else:
                # Skip commit if we are collapsing
                newnode = None
            # Update the state
            if newnode is not None:
                self.state[rev] = repo[newnode].rev()
                ui.debug(b'rebased as %s\n' % short(newnode))
                if repo[newnode].isempty():
                    ui.warn(
                        _(
                            b'note: created empty successor for %s, its '
                            b'destination already has all its changes\n'
                        )
                        % desc
                    )
            else:
                if not self.collapsef:
                    ui.warn(
                        _(
                            b'note: not rebasing %s, its destination already '
                            b'has all its changes\n'
                        )
                        % desc
                    )
                    self.skipped.add(rev)
                self.state[rev] = p1
                ui.debug(b'next revision set to %d\n' % p1)
        else:
            ui.status(
                _(b'already rebased %s as %s\n') % (desc, repo[self.state[rev]])
            )
        if not tr:
            # When not using single transaction, store state after each
            # commit is completely done. On InterventionRequired, we thus
            # won't store the status. Instead, we'll hit the "len(parents) == 2"
            # case and realize that the commit was in progress.
            self.storestatus()

    def _finishrebase(self):
        repo, ui, opts = self.repo, self.ui, self.opts
        fm = ui.formatter(b'rebase', pycompat.byteskwargs(opts))
        fm.startitem()
        if self.collapsef:
            p1, p2, _base = defineparents(
                repo,
                min(self.state),
                self.destmap,
                self.state,
                self.skipped,
                self.obsolete_with_successor_in_destination,
            )
            editopt = opts.get('edit')
            editform = b'rebase.collapse'
            if self.collapsemsg:
                commitmsg = self.collapsemsg
            else:
                commitmsg = b'Collapsed revision'
                for rebased in sorted(self.state):
                    if rebased not in self.skipped:
                        commitmsg += b'\n* %s' % repo[rebased].description()
                editopt = True
            editor = cmdutil.getcommiteditor(edit=editopt, editform=editform)
            revtoreuse = max(self.state)

            self.wctx.setparents(repo[p1].node(), repo[self.external].node())
            newnode = self._concludenode(
                revtoreuse, editor, commitmsg=commitmsg
            )

            if newnode is not None:
                newrev = repo[newnode].rev()
                for oldrev in self.state:
                    self.state[oldrev] = newrev

        if b'qtip' in repo.tags():
            updatemq(repo, self.state, self.skipped, **opts)

        # restore original working directory
        # (we do this before stripping)
        newwd = self.state.get(self.originalwd, self.originalwd)
        if newwd < 0:
            # original directory is a parent of rebase set root or ignored
            newwd = self.originalwd
        if newwd not in [c.rev() for c in repo[None].parents()]:
            ui.note(_(b"update back to initial working directory parent\n"))
            mergemod.update(repo[newwd])

        collapsedas = None
        if self.collapsef and not self.keepf:
            collapsedas = newnode
        clearrebased(
            ui,
            repo,
            self.destmap,
            self.state,
            self.skipped,
            collapsedas,
            self.keepf,
            fm=fm,
            backup=self.backupf,
        )

        clearstatus(repo)
        clearcollapsemsg(repo)

        ui.note(_(b"rebase completed\n"))
        util.unlinkpath(repo.sjoin(b'undo'), ignoremissing=True)
        if self.skipped:
            skippedlen = len(self.skipped)
            ui.note(_(b"%d revisions have been skipped\n") % skippedlen)
        fm.end()

        if (
            self.activebookmark
            and self.activebookmark in repo._bookmarks
            and repo[b'.'].node() == repo._bookmarks[self.activebookmark]
        ):
            bookmarks.activate(repo, self.activebookmark)

    def _abort(self, backup=True, suppwarns=False, dryrun=False, confirm=False):
        '''Restore the repository to its original state.'''

        repo = self.repo
        try:
            # If the first commits in the rebased set get skipped during the
            # rebase, their values within the state mapping will be the dest
            # rev id. The rebased list must not contain the dest rev
            # (issue4896)
            rebased = [
                s
                for r, s in self.state.items()
                if s >= 0 and s != r and s != self.destmap[r]
            ]
            immutable = [d for d in rebased if not repo[d].mutable()]
            cleanup = True
            if immutable:
                repo.ui.warn(
                    _(b"warning: can't clean up public changesets %s\n")
                    % b', '.join(bytes(repo[r]) for r in immutable),
                    hint=_(b"see 'hg help phases' for details"),
                )
                cleanup = False

            descendants = set()
            if rebased:
                descendants = set(repo.changelog.descendants(rebased))
            if descendants - set(rebased):
                repo.ui.warn(
                    _(
                        b"warning: new changesets detected on "
                        b"destination branch, can't strip\n"
                    )
                )
                cleanup = False

            if cleanup:
                if rebased:
                    strippoints = [
                        c.node() for c in repo.set(b'roots(%ld)', rebased)
                    ]

                updateifonnodes = set(rebased)
                updateifonnodes.update(self.destmap.values())

                if not confirm:
                    # note: when dry run is set the `rebased` and `destmap`
                    # variables seem to contain "bad" contents, so do not
                    # rely on them. As dryrun does not need this part of
                    # the cleanup, this is "fine"
                    updateifonnodes.add(self.originalwd)

                shouldupdate = repo[b'.'].rev() in updateifonnodes

                # Update away from the rebase if necessary
                if not dryrun and shouldupdate:
                    mergemod.clean_update(repo[self.originalwd])

                # Strip from the first rebased revision
                if rebased:
                    repair.strip(repo.ui, repo, strippoints, backup=backup)

            if self.activebookmark and self.activebookmark in repo._bookmarks:
                bookmarks.activate(repo, self.activebookmark)

        finally:
            clearstatus(repo)
            clearcollapsemsg(repo)
            if not suppwarns:
                repo.ui.warn(_(b'rebase aborted\n'))
        return 0

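# Editor's note: a minimal, hypothetical illustration of the `state` mapping
# that the runtime above manipulates. The revision numbers are invented; only
# the shape {srcrev: destrev-or-sentinel} is taken from the code above
# (`revtodo` marks pending entries, skipped revisions point at their new p1).
#
#     state = {
#         10: revtodo,  # not processed yet
#         11: 42,       # rebased: new node committed as rev 42
#         12: 8,        # skipped: folded away, points at its new parent p1
#     }
#
# `_abort` only considers entries whose value is a real, newly created
# revision (s >= 0, s != r, s != destmap[r]) for stripping, which is why
# pending and skipped entries never end up in its `rebased` list.
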
@command(
    b'rebase',
    [
        (
            b's',
            b'source',
            [],
            _(b'rebase the specified changesets and their descendants'),
            _(b'REV'),
        ),
        (
            b'b',
            b'base',
            [],
            _(b'rebase everything from branching point of specified changeset'),
            _(b'REV'),
        ),
        (b'r', b'rev', [], _(b'rebase these revisions'), _(b'REV')),
        (
            b'd',
            b'dest',
            b'',
            _(b'rebase onto the specified changeset'),
            _(b'REV'),
        ),
        (b'', b'collapse', False, _(b'collapse the rebased changesets')),
        (
            b'm',
            b'message',
            b'',
            _(b'use text as collapse commit message'),
            _(b'TEXT'),
        ),
        (b'e', b'edit', False, _(b'invoke editor on commit messages')),
        (
            b'l',
            b'logfile',
            b'',
            _(b'read collapse commit message from file'),
            _(b'FILE'),
        ),
        (b'k', b'keep', False, _(b'keep original changesets')),
        (b'', b'keepbranches', False, _(b'keep original branch names')),
        (b'D', b'detach', False, _(b'(DEPRECATED)')),
        (b'i', b'interactive', False, _(b'(DEPRECATED)')),
        (b't', b'tool', b'', _(b'specify merge tool')),
        (b'', b'stop', False, _(b'stop interrupted rebase')),
        (b'c', b'continue', False, _(b'continue an interrupted rebase')),
        (b'a', b'abort', False, _(b'abort an interrupted rebase')),
        (
            b'',
            b'auto-orphans',
            b'',
            _(
                b'automatically rebase orphan revisions '
                b'in the specified revset (EXPERIMENTAL)'
            ),
        ),
    ]
    + cmdutil.dryrunopts
    + cmdutil.formatteropts
    + cmdutil.confirmopts,
    _(b'[[-s REV]... | [-b REV]... | [-r REV]...] [-d REV] [OPTION]...'),
    helpcategory=command.CATEGORY_CHANGE_MANAGEMENT,
)
def rebase(ui, repo, **opts):
936 """move changeset (and descendants) to a different branch
936 """move changeset (and descendants) to a different branch
937
937
938 Rebase uses repeated merging to graft changesets from one part of
938 Rebase uses repeated merging to graft changesets from one part of
939 history (the source) onto another (the destination). This can be
939 history (the source) onto another (the destination). This can be
940 useful for linearizing *local* changes relative to a master
940 useful for linearizing *local* changes relative to a master
941 development tree.
941 development tree.
942
942
943 Published commits cannot be rebased (see :hg:`help phases`).
943 Published commits cannot be rebased (see :hg:`help phases`).
944 To copy commits, see :hg:`help graft`.
944 To copy commits, see :hg:`help graft`.
945
945
946 If you don't specify a destination changeset (``-d/--dest``), rebase
946 If you don't specify a destination changeset (``-d/--dest``), rebase
947 will use the same logic as :hg:`merge` to pick a destination. if
947 will use the same logic as :hg:`merge` to pick a destination. if
948 the current branch contains exactly one other head, the other head
948 the current branch contains exactly one other head, the other head
949 is merged with by default. Otherwise, an explicit revision with
949 is merged with by default. Otherwise, an explicit revision with
950 which to merge with must be provided. (destination changeset is not
950 which to merge with must be provided. (destination changeset is not
951 modified by rebasing, but new changesets are added as its
951 modified by rebasing, but new changesets are added as its
952 descendants.)
952 descendants.)
953
953
954 Here are the ways to select changesets:
954 Here are the ways to select changesets:
955
955
956 1. Explicitly select them using ``--rev``.
956 1. Explicitly select them using ``--rev``.
957
957
958 2. Use ``--source`` to select a root changeset and include all of its
958 2. Use ``--source`` to select a root changeset and include all of its
959 descendants.
959 descendants.
960
960
961 3. Use ``--base`` to select a changeset; rebase will find ancestors
961 3. Use ``--base`` to select a changeset; rebase will find ancestors
962 and their descendants which are not also ancestors of the destination.
962 and their descendants which are not also ancestors of the destination.
963
963
964 4. If you do not specify any of ``--rev``, ``--source``, or ``--base``,
964 4. If you do not specify any of ``--rev``, ``--source``, or ``--base``,
965 rebase will use ``--base .`` as above.
965 rebase will use ``--base .`` as above.
966
966
967 If ``--source`` or ``--rev`` is used, special names ``SRC`` and ``ALLSRC``
967 If ``--source`` or ``--rev`` is used, special names ``SRC`` and ``ALLSRC``
968 can be used in ``--dest``. Destination would be calculated per source
968 can be used in ``--dest``. Destination would be calculated per source
969 revision with ``SRC`` substituted by that single source revision and
969 revision with ``SRC`` substituted by that single source revision and
970 ``ALLSRC`` substituted by all source revisions.
970 ``ALLSRC`` substituted by all source revisions.
971
971
972 Rebase will destroy original changesets unless you use ``--keep``.
972 Rebase will destroy original changesets unless you use ``--keep``.
973 It will also move your bookmarks (even if you do).
973 It will also move your bookmarks (even if you do).
974
974
975 Some changesets may be dropped if they do not contribute changes
975 Some changesets may be dropped if they do not contribute changes
976 (e.g. merges from the destination branch).
976 (e.g. merges from the destination branch).
977
977
978 Unlike ``merge``, rebase will do nothing if you are at the branch tip of
978 Unlike ``merge``, rebase will do nothing if you are at the branch tip of
979 a named branch with two heads. You will need to explicitly specify source
979 a named branch with two heads. You will need to explicitly specify source
980 and/or destination.
980 and/or destination.
981
981
982 If you need to use a tool to automate merge/conflict decisions, you
982 If you need to use a tool to automate merge/conflict decisions, you
983 can specify one with ``--tool``, see :hg:`help merge-tools`.
983 can specify one with ``--tool``, see :hg:`help merge-tools`.
984 As a caveat: the tool will not be used to mediate when a file was
984 As a caveat: the tool will not be used to mediate when a file was
985 deleted, there is no hook presently available for this.
985 deleted, there is no hook presently available for this.
986
986
987 If a rebase is interrupted to manually resolve a conflict, it can be
987 If a rebase is interrupted to manually resolve a conflict, it can be
988 continued with --continue/-c, aborted with --abort/-a, or stopped with
988 continued with --continue/-c, aborted with --abort/-a, or stopped with
989 --stop.
989 --stop.
990
990
991 .. container:: verbose
991 .. container:: verbose
992
992
993 Examples:
993 Examples:
994
994
995 - move "local changes" (current commit back to branching point)
995 - move "local changes" (current commit back to branching point)
996 to the current branch tip after a pull::
996 to the current branch tip after a pull::
997
997
998 hg rebase
998 hg rebase
999
999
1000 - move a single changeset to the stable branch::
1000 - move a single changeset to the stable branch::
1001
1001
1002 hg rebase -r 5f493448 -d stable
1002 hg rebase -r 5f493448 -d stable
1003
1003
1004 - splice a commit and all its descendants onto another part of history::
1004 - splice a commit and all its descendants onto another part of history::
1005
1005
1006 hg rebase --source c0c3 --dest 4cf9
1006 hg rebase --source c0c3 --dest 4cf9
1007
1007
1008 - rebase everything on a branch marked by a bookmark onto the
1008 - rebase everything on a branch marked by a bookmark onto the
1009 default branch::
1009 default branch::
1010
1010
1011 hg rebase --base myfeature --dest default
1011 hg rebase --base myfeature --dest default
1012
1012
1013 - collapse a sequence of changes into a single commit::
1013 - collapse a sequence of changes into a single commit::
1014
1014
1015 hg rebase --collapse -r 1520:1525 -d .
1015 hg rebase --collapse -r 1520:1525 -d .
1016
1016
1017 - move a named branch while preserving its name::
1017 - move a named branch while preserving its name::
1018
1018
1019 hg rebase -r "branch(featureX)" -d 1.3 --keepbranches
1019 hg rebase -r "branch(featureX)" -d 1.3 --keepbranches
1020
1020
1021 - stabilize orphaned changesets so history looks linear::
1021 - stabilize orphaned changesets so history looks linear::
1022
1022
1023 hg rebase -r 'orphan()-obsolete()'\
1023 hg rebase -r 'orphan()-obsolete()'\
1024 -d 'first(max((successors(max(roots(ALLSRC) & ::SRC)^)-obsolete())::) +\
1024 -d 'first(max((successors(max(roots(ALLSRC) & ::SRC)^)-obsolete())::) +\
1025 max(::((roots(ALLSRC) & ::SRC)^)-obsolete()))'
1025 max(::((roots(ALLSRC) & ::SRC)^)-obsolete()))'
1026
1026
1027 Configuration Options:
1027 Configuration Options:
1028
1028
1029 You can make rebase require a destination if you set the following config
1029 You can make rebase require a destination if you set the following config
1030 option::
1030 option::
1031
1031
1032 [commands]
1032 [commands]
1033 rebase.requiredest = True
1033 rebase.requiredest = True
1034
1034
1035 By default, rebase will close the transaction after each commit. For
1035 By default, rebase will close the transaction after each commit. For
1036 performance purposes, you can configure rebase to use a single transaction
1036 performance purposes, you can configure rebase to use a single transaction
1037 across the entire rebase. WARNING: This setting introduces a significant
1037 across the entire rebase. WARNING: This setting introduces a significant
1038 risk of losing the work you've done in a rebase if the rebase aborts
1038 risk of losing the work you've done in a rebase if the rebase aborts
1039 unexpectedly::
1039 unexpectedly::
1040
1040
1041 [rebase]
1041 [rebase]
1042 singletransaction = True
1042 singletransaction = True
1043
1043
1044 By default, rebase writes to the working copy, but you can configure it to
1044 By default, rebase writes to the working copy, but you can configure it to
1045 run in-memory for better performance. When the rebase is not moving the
1045 run in-memory for better performance. When the rebase is not moving the
1046 parent(s) of the working copy (AKA the "currently checked out changesets"),
1046 parent(s) of the working copy (AKA the "currently checked out changesets"),
1047 this may also allow it to run even if the working copy is dirty::
1047 this may also allow it to run even if the working copy is dirty::
1048
1048
1049 [rebase]
1049 [rebase]
1050 experimental.inmemory = True
1050 experimental.inmemory = True
1051
1051
1052 Return Values:
1052 Return Values:
1053
1053
1054 Returns 0 on success, 1 if nothing to rebase or there are
1054 Returns 0 on success, 1 if nothing to rebase or there are
1055 unresolved conflicts.
1055 unresolved conflicts.
1056
1056
1057 """
1057 """
    inmemory = ui.configbool(b'rebase', b'experimental.inmemory')
    action = cmdutil.check_at_most_one_arg(opts, 'abort', 'stop', 'continue')
    if action:
        cmdutil.check_incompatible_arguments(
            opts, action, ['confirm', 'dry_run']
        )
        cmdutil.check_incompatible_arguments(
            opts, action, ['rev', 'source', 'base', 'dest']
        )
    cmdutil.check_at_most_one_arg(opts, 'confirm', 'dry_run')
    cmdutil.check_at_most_one_arg(opts, 'rev', 'source', 'base')

    if action or repo.currenttransaction() is not None:
        # in-memory rebase is not compatible with resuming rebases.
        # (Or if it is run within a transaction, since the restart logic can
        # fail the entire transaction.)
        inmemory = False

    if opts.get('auto_orphans'):
        disallowed_opts = set(opts) - {'auto_orphans'}
        cmdutil.check_incompatible_arguments(
            opts, 'auto_orphans', disallowed_opts
        )

        userrevs = list(repo.revs(opts.get('auto_orphans')))
        opts['rev'] = [revsetlang.formatspec(b'%ld and orphan()', userrevs)]
        opts['dest'] = b'_destautoorphanrebase(SRC)'

    if opts.get('dry_run') or opts.get('confirm'):
        return _dryrunrebase(ui, repo, action, opts)
    elif action == 'stop':
        rbsrt = rebaseruntime(repo, ui)
        with repo.wlock(), repo.lock():
            rbsrt.restorestatus()
            if rbsrt.collapsef:
                raise error.StateError(_(b"cannot stop in --collapse session"))
            allowunstable = obsolete.isenabled(repo, obsolete.allowunstableopt)
            if not (rbsrt.keepf or allowunstable):
                raise error.StateError(
                    _(
                        b"cannot remove original changesets with"
                        b" unrebased descendants"
                    ),
                    hint=_(
                        b'either enable obsmarkers to allow unstable '
                        b'revisions or use --keep to keep original '
                        b'changesets'
                    ),
                )
            # update to the current working revision
            # to clear interrupted merge
            mergemod.clean_update(repo[rbsrt.originalwd])
            rbsrt._finishrebase()
            return 0
    elif inmemory:
        try:
            # in-memory merge doesn't support conflicts, so if we hit any, abort
            # and re-run as an on-disk merge.
            overrides = {(b'rebase', b'singletransaction'): True}
            with ui.configoverride(overrides, b'rebase'):
                return _dorebase(ui, repo, action, opts, inmemory=inmemory)
        except error.InMemoryMergeConflictsError:
            if ui.configbool(b'devel', b'rebase.force-in-memory-merge'):
                raise
            ui.warn(
                _(
                    b'hit merge conflicts; re-running rebase without in-memory'
                    b' merge\n'
                )
            )
            clearstatus(repo)
            clearcollapsemsg(repo)
            return _dorebase(ui, repo, action, opts, inmemory=False)
    else:
        return _dorebase(ui, repo, action, opts)

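# Editor's note: the `elif inmemory:` branch above is a try-fast-path-then-
# fall-back pattern worth seeing in isolation. The sketch below is editorial
# and illustrative only -- `run_in_memory`, `run_on_disk`, `cleanup`, and
# `FastPathConflict` are hypothetical stand-ins, not Mercurial APIs:
#
#     def run_with_fallback(run_in_memory, run_on_disk, cleanup):
#         try:
#             # Attempt the cheap path first; it must be safe to abandon
#             # without leaving partial state behind.
#             return run_in_memory()
#         except FastPathConflict:
#             # Clean up anything the fast path wrote, then redo the work
#             # with the slower but fully capable implementation.
#             cleanup()
#             return run_on_disk()
#
# Rebase's twist is that the in-memory attempt forces
# `rebase.singletransaction=True`, so a conflict rolls everything back and
# the on-disk retry starts from a clean slate.
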
def _dryrunrebase(ui, repo, action, opts):
    rbsrt = rebaseruntime(repo, ui, inmemory=True, dryrun=True, opts=opts)
    confirm = opts.get('confirm')
    if confirm:
        ui.status(_(b'starting in-memory rebase\n'))
    else:
        ui.status(
            _(b'starting dry-run rebase; repository will not be changed\n')
        )
    with repo.wlock(), repo.lock():
        needsabort = True
        try:
            overrides = {(b'rebase', b'singletransaction'): True}
            with ui.configoverride(overrides, b'rebase'):
                res = _origrebase(
                    ui,
                    repo,
                    action,
                    opts,
                    rbsrt,
                )
                if res == _nothingtorebase():
                    needsabort = False
                    return res
        except error.ConflictResolutionRequired:
            ui.status(_(b'hit a merge conflict\n'))
            return 1
        except error.Abort:
            needsabort = False
            raise
        else:
            if confirm:
                ui.status(_(b'rebase completed successfully\n'))
                if not ui.promptchoice(_(b'apply changes (yn)?$$ &Yes $$ &No')):
                    # finish unfinished rebase
                    rbsrt._finishrebase()
                else:
                    rbsrt._prepareabortorcontinue(
                        isabort=True,
                        backup=False,
                        suppwarns=True,
                        confirm=confirm,
                    )
                needsabort = False
            else:
                ui.status(
                    _(
                        b'dry-run rebase completed successfully; run without'
                        b' -n/--dry-run to perform this rebase\n'
                    )
                )
            return 0
        finally:
            if needsabort:
                # no need to store backup in case of dryrun
                rbsrt._prepareabortorcontinue(
                    isabort=True,
                    backup=False,
                    suppwarns=True,
                    dryrun=opts.get('dry_run'),
                )

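# Editor's note: `ui.promptchoice` returns the zero-based index of the chosen
# answer, so with the choices "$$ &Yes $$ &No" above, picking "Yes" yields 0
# and `if not ui.promptchoice(...)` reads as "if the user said yes". A
# minimal illustration (values invented):
#
#     choice = ui.promptchoice(_(b'apply changes (yn)?$$ &Yes $$ &No'))
#     # choice == 0  -> Yes -> finish the rebase
#     # choice == 1  -> No  -> abort and restore the original state
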
def _dorebase(ui, repo, action, opts, inmemory=False):
    rbsrt = rebaseruntime(repo, ui, inmemory, opts=opts)
    return _origrebase(ui, repo, action, opts, rbsrt)

def _origrebase(ui, repo, action, opts, rbsrt):
    assert action != 'stop'
    with repo.wlock(), repo.lock():
        if opts.get('interactive'):
            try:
                if extensions.find(b'histedit'):
                    enablehistedit = b''
            except KeyError:
                enablehistedit = b" --config extensions.histedit="
            help = b"hg%s help -e histedit" % enablehistedit
            msg = (
                _(
                    b"interactive history editing is supported by the "
                    b"'histedit' extension (see \"%s\")"
                )
                % help
            )
            raise error.InputError(msg)

        if rbsrt.collapsemsg and not rbsrt.collapsef:
            raise error.InputError(
                _(b'message can only be specified with collapse')
            )

        if action:
            if rbsrt.collapsef:
                raise error.InputError(
                    _(b'cannot use collapse with continue or abort')
                )
            if action == 'abort' and opts.get('tool', False):
                ui.warn(_(b'tool option will be ignored\n'))
            if action == 'continue':
                ms = mergestatemod.mergestate.read(repo)
                mergeutil.checkunresolved(ms)

            retcode = rbsrt._prepareabortorcontinue(isabort=(action == 'abort'))
            if retcode is not None:
                return retcode
        else:
            # search default destination in this space
            # used in the 'hg pull --rebase' case, see issue 5214.
            destspace = opts.get('_destspace')
            destmap = _definedestmap(
                ui,
                repo,
                rbsrt.inmemory,
                opts.get('dest', None),
                opts.get('source', []),
                opts.get('base', []),
                opts.get('rev', []),
                destspace=destspace,
            )
            retcode = rbsrt._preparenewrebase(destmap)
            if retcode is not None:
                return retcode
            storecollapsemsg(repo, rbsrt.collapsemsg)

        tr = None

        singletr = ui.configbool(b'rebase', b'singletransaction')
        if singletr:
            tr = repo.transaction(b'rebase')

        # If `rebase.singletransaction` is enabled, wrap the entire operation in
        # one transaction here. Otherwise, transactions are obtained when
        # committing each node, which is slower but allows partial success.
        with util.acceptintervention(tr):
            rbsrt._performrebase(tr)
            if not rbsrt.dryrun:
                rbsrt._finishrebase()

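# Editor's note: a minimal, generic sketch (not Mercurial API) of the
# conditional-transaction pattern used in `_origrebase` above: open one
# enclosing transaction only when configured, otherwise let each step manage
# its own. `steps` and `open_txn` are hypothetical stand-ins.
import contextlib


def _sketch_run_steps(steps, open_txn, single_transaction):
    # Pick one big transaction or a no-op scope up front...
    scope = open_txn() if single_transaction else contextlib.nullcontext()
    with scope:
        for step in steps:
            # ...then, per step, open a small transaction only when we are
            # not already inside the big one. One transaction is faster;
            # per-step transactions preserve partial progress on failure.
            inner = contextlib.nullcontext() if single_transaction else open_txn()
            with inner:
                step()
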
def _definedestmap(ui, repo, inmemory, destf, srcf, basef, revf, destspace):
    """use revisions argument to define destmap {srcrev: destrev}"""
    if revf is None:
        revf = []

    # destspace is here to work around issues with `hg pull --rebase` see
    # issue5214 for details

    cmdutil.checkunfinished(repo)
    if not inmemory:
        cmdutil.bailifchanged(repo)

    if ui.configbool(b'commands', b'rebase.requiredest') and not destf:
        raise error.InputError(
            _(b'you must specify a destination'),
            hint=_(b'use: hg rebase -d REV'),
        )

    dest = None

    if revf:
        rebaseset = logcmdutil.revrange(repo, revf)
        if not rebaseset:
            ui.status(_(b'empty "rev" revision set - nothing to rebase\n'))
            return None
    elif srcf:
        src = logcmdutil.revrange(repo, srcf)
        if not src:
            ui.status(_(b'empty "source" revision set - nothing to rebase\n'))
            return None
        # `+ (%ld)` to work around `wdir()::` being empty
        rebaseset = repo.revs(b'(%ld):: + (%ld)', src, src)
    else:
        base = logcmdutil.revrange(repo, basef or [b'.'])
        if not base:
            ui.status(
                _(b'empty "base" revision set - ' b"can't compute rebase set\n")
            )
            return None
        if destf:
            # --base does not support multiple destinations
            dest = logcmdutil.revsingle(repo, destf)
        else:
            dest = repo[_destrebase(repo, base, destspace=destspace)]
            destf = bytes(dest)

        roots = []  # selected children of branching points
        bpbase = {}  # {branchingpoint: [origbase]}
        for b in base:  # group bases by branching points
            bp = repo.revs(b'ancestor(%d, %d)', b, dest.rev()).first()
            bpbase[bp] = bpbase.get(bp, []) + [b]
        if None in bpbase:
            # emulate the old behavior, showing "nothing to rebase" (a better
            # behavior might be to abort with a "cannot find branching point"
            # error)
            bpbase.clear()
        for bp, bs in bpbase.items():  # calculate roots
            roots += list(repo.revs(b'children(%d) & ancestors(%ld)', bp, bs))

        rebaseset = repo.revs(b'%ld::', roots)

        if not rebaseset:
            # transform to list because smartsets are not comparable to
            # lists. This should be improved to honor laziness of
            # smartset.
            if list(base) == [dest.rev()]:
                if basef:
                    ui.status(
                        _(
                            b'nothing to rebase - %s is both "base"'
                            b' and destination\n'
                        )
                        % dest
                    )
                else:
                    ui.status(
                        _(
                            b'nothing to rebase - working directory '
                            b'parent is also destination\n'
                        )
                    )
            elif not repo.revs(b'%ld - ::%d', base, dest.rev()):
                if basef:
                    ui.status(
                        _(
                            b'nothing to rebase - "base" %s is '
                            b'already an ancestor of destination '
                            b'%s\n'
                        )
                        % (b'+'.join(bytes(repo[r]) for r in base), dest)
                    )
                else:
                    ui.status(
                        _(
                            b'nothing to rebase - working '
                            b'directory parent is already an '
                            b'ancestor of destination %s\n'
                        )
                        % dest
                    )
            else:  # can it happen?
                ui.status(
                    _(b'nothing to rebase from %s to %s\n')
                    % (b'+'.join(bytes(repo[r]) for r in base), dest)
                )
            return None

    if wdirrev in rebaseset:
        raise error.InputError(_(b'cannot rebase the working copy'))
    rebasingwcp = repo[b'.'].rev() in rebaseset
    ui.log(
        b"rebase",
        b"rebasing working copy parent: %r\n",
        rebasingwcp,
        rebase_rebasing_wcp=rebasingwcp,
    )
    if inmemory and rebasingwcp:
        # Check these since we did not before.
        cmdutil.checkunfinished(repo)
        cmdutil.bailifchanged(repo)

    if not destf:
        dest = repo[_destrebase(repo, rebaseset, destspace=destspace)]
        destf = bytes(dest)

    allsrc = revsetlang.formatspec(b'%ld', rebaseset)
    alias = {b'ALLSRC': allsrc}

    if dest is None:
        try:
            # fast path: try to resolve dest without SRC alias
            dest = scmutil.revsingle(repo, destf, localalias=alias)
        except error.RepoLookupError:
            # multi-dest path: resolve dest for each SRC separately
            destmap = {}
            for r in rebaseset:
                alias[b'SRC'] = revsetlang.formatspec(b'%d', r)
                # use repo.anyrevs instead of scmutil.revsingle because we
                # don't want to abort if destset is empty.
                destset = repo.anyrevs([destf], user=True, localalias=alias)
                size = len(destset)
                if size == 1:
                    destmap[r] = destset.first()
                elif size == 0:
                    ui.note(_(b'skipping %s - empty destination\n') % repo[r])
                else:
                    raise error.InputError(
                        _(b'rebase destination for %s is not unique') % repo[r]
                    )

    if dest is not None:
        # single-dest case: assign dest to each rev in rebaseset
        destrev = dest.rev()
        destmap = {r: destrev for r in rebaseset}  # {srcrev: destrev}

    if not destmap:
        ui.status(_(b'nothing to rebase - empty destination\n'))
        return None

    return destmap

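# Editor's note: a worked toy example (revision numbers invented) of what
# `_definedestmap` produces. With a single destination, every source maps to
# the same rev; with a multi-destination spec mentioning `SRC`, the spec is
# re-evaluated once per source with SRC bound to that revision:
#
#     # hg rebase -r 10+11+12 -d 42
#     destmap == {10: 42, 11: 42, 12: 42}
#
#     # hg rebase -r 10+11 -d '<some revset mentioning SRC>'
#     # each source resolves independently, so 10 and 11 may map to
#     # different destinations:
#     destmap == {10: 42, 11: 57}
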
def externalparent(repo, state, destancestors):
    """Return the revision that should be used as the second parent
    when the revisions in state are collapsed on top of destancestors.
    Abort if there is more than one parent.
    """
    parents = set()
    source = min(state)
    for rev in state:
        if rev == source:
            continue
        for p in repo[rev].parents():
            if p.rev() not in state and p.rev() not in destancestors:
                parents.add(p.rev())
    if not parents:
        return nullrev
    if len(parents) == 1:
        return parents.pop()
    raise error.StateError(
        _(
            b'unable to collapse on top of %d, there is more '
            b'than one external parent: %s'
        )
        % (max(destancestors), b', '.join(b"%d" % p for p in sorted(parents)))
    )

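# Editor's note: the external-parent rule above, restated on plain data so it
# can be tested in isolation. This is an editorial sketch, not Mercurial API:
# `parent_map` is a hypothetical {rev: [parent revs]} mapping standing in for
# repo[rev].parents().


def _sketch_externalparent(parent_map, state, destancestors):
    """Return the set of parents outside both the collapsed set and the
    destination's ancestors; collapsing is only well-defined when there is
    at most one such parent."""
    source = min(state)
    external = set()
    for rev in state:
        if rev == source:
            continue  # the root's parents are where we re-attach anyway
        for p in parent_map[rev]:
            if p not in state and p not in destancestors:
                external.add(p)
    return external


# Example: collapsing {3, 4, 5}, where 5 merged in rev 2 (which is neither
# being rebased nor an ancestor of the destination), leaves exactly one
# external parent: {2}.
assert _sketch_externalparent(
    {3: [1], 4: [3], 5: [4, 2]}, state={3, 4, 5}, destancestors={0, 1}
) == {2}
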
def commitmemorynode(repo, wctx, editor, extra, user, date, commitmsg):
    """Commit the memory changes with parents p1 and p2.
    Return node of committed revision."""
    # By convention, ``extra['branch']`` (set by extrafn) clobbers
    # ``branch`` (used when passing ``--keepbranches``).
    branch = None
    if b'branch' in extra:
        branch = extra[b'branch']

    # FIXME: We call _compact() because it's required to correctly detect
    # changed files. This was added to fix a regression shortly before the 5.5
    # release. A proper fix will be done in the default branch.
    wctx._compact()
    memctx = wctx.tomemctx(
        commitmsg,
        date=date,
        extra=extra,
        user=user,
        branch=branch,
        editor=editor,
    )
    if memctx.isempty() and not repo.ui.configbool(b'ui', b'allowemptycommit'):
        return None
    commitres = repo.commitctx(memctx)
    wctx.clean()  # Might be reused
    return commitres

def commitnode(repo, editor, extra, user, date, commitmsg):
    """Commit the wd changes with parents p1 and p2.
    Return node of committed revision."""
    tr = util.nullcontextmanager
    if not repo.ui.configbool(b'rebase', b'singletransaction'):
        tr = lambda: repo.transaction(b'rebase')
    with tr():
        # Commit might fail if unresolved files exist
        newnode = repo.commit(
            text=commitmsg, user=user, date=date, extra=extra, editor=editor
        )

        repo.dirstate.setbranch(
            repo[newnode].branch(), repo.currenttransaction()
        )
    return newnode

def rebasenode(repo, rev, p1, p2, base, collapse, wctx):
    """Rebase a single revision rev on top of p1 using base as merge ancestor"""
    # Merge phase
    # Update to destination and merge it with local
    p1ctx = repo[p1]
    if wctx.isinmemory():
        wctx.setbase(p1ctx)
        scope = util.nullcontextmanager
    else:
        if repo[b'.'].rev() != p1:
            repo.ui.debug(b" update to %d:%s\n" % (p1, p1ctx))
            mergemod.clean_update(p1ctx)
        else:
            repo.ui.debug(b" already in destination\n")
        scope = lambda: repo.dirstate.changing_parents(repo)
        # This is, alas, necessary to invalidate workingctx's manifest cache,
        # as well as other data we litter on it in other places.
        wctx = repo[None]
        repo.dirstate.write(repo.currenttransaction())
    ctx = repo[rev]
    repo.ui.debug(b" merge against %d:%s\n" % (rev, ctx))
    if base is not None:
        repo.ui.debug(b" detach base %d:%s\n" % (base, repo[base]))

    with scope():
        # See explanation in merge.graft()
        mergeancestor = repo.changelog.isancestor(p1ctx.node(), ctx.node())
        stats = mergemod._update(
            repo,
            rev,
            branchmerge=True,
            force=True,
            ancestor=base,
            mergeancestor=mergeancestor,
            labels=[b'dest', b'source', b'parent of source'],
            wc=wctx,
        )
        wctx.setparents(p1ctx.node(), repo[p2].node())
        if collapse:
            copies.graftcopies(wctx, ctx, p1ctx)
        else:
            # If we're not using --collapse, we need to
            # duplicate copies between the revision we're
            # rebasing and its first parent.
            copies.graftcopies(wctx, ctx, ctx.p1())

    if stats.unresolvedcount > 0:
        if wctx.isinmemory():
            raise error.InMemoryMergeConflictsError()
        else:
            raise error.ConflictResolutionRequired(b'rebase')

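# Editor's note: why `base` is passed as the merge ancestor above. Treating
# the source's original parent as the common ancestor makes the three-way
# merge replay only the source revision's own delta onto the destination.
# Toy picture (revisions invented):
#
#     A --- B --- C (rev, being rebased)     merge inputs:
#      \                                       local    = F  (destination, p1)
#       F (destination)                        other    = C  (source)
#                                              ancestor = B  (base)
#
# The computed change is "C relative to B", applied on top of F; this is
# what the " detach base" debug message above refers to.
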
def adjustdest(repo, rev, destmap, state, skipped):
    r"""adjust rebase destination given the current rebase state

    rev is what is being rebased. Return a list of two revs, which are the
    adjusted destinations for rev's p1 and p2, respectively. If a parent is
    nullrev, return dest without adjustment for it.

    For example, when rebasing B+E to F, C to G, rebase will first move B
    to B1, and E's destination will be adjusted from F to B1.

        B1 <- written during rebasing B
        |
        F <- original destination of B, E
        |
        | E <- rev, which is being rebased
        | |
        | D <- prev, one parent of rev being checked
        | |
        | x <- skipped, ex. no successor or successor in (::dest)
        | |
        | C <- rebased as C', different destination
        | |
        | B <- rebased as B1     C'
        |/                       |
        A                        G <- destination of C, different

    Another example about merge changeset, rebase -r C+G+H -d K, rebase will
    first move C to C1, G to G1, and when it's checking H, the adjusted
    destinations will be [C1, G1].

        H       C1 G1
       /|       | /
      F G       |/
    K | |  ->   K
    | C D       |
    | |/        |
    | B         | ...
    |/          |/
    A           A

    Besides, adjust dest according to existing rebase information. For example,

      B C D    B needs to be rebased on top of C, C needs to be rebased on top
       \|/     of D. We will rebase C first.

      C'       After rebasing C, when considering B's destination, use C'
      |        instead of the original C.
      B D
       \ /
        A
    """
    # pick already rebased revs with same dest from state as interesting source
    dest = destmap[rev]
    source = [
        s
        for s, d in state.items()
        if d > 0 and destmap[s] == dest and s not in skipped
    ]

    result = []
    for prev in repo.changelog.parentrevs(rev):
        adjusted = dest
        if prev != nullrev:
            candidate = repo.revs(b'max(%ld and (::%d))', source, prev).first()
            if candidate is not None:
                adjusted = state[candidate]
        if adjusted == dest and dest in state:
            adjusted = state[dest]
            if adjusted == revtodo:
                # sortsource should produce an order that makes this impossible
                raise error.ProgrammingError(
                    b'rev %d should be rebased already at this time' % dest
                )
        result.append(adjusted)
    return result


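# The rule above can be hard to see through the revset machinery. Below is a
# minimal standalone sketch (not part of rebase.py; names and revision
# numbers are made up for illustration, and only p1 is considered to keep it
# short): among a parent's already-rebased ancestors, pick the highest one
# and use its new location as the adjusted destination.
def _adjustdest_sketch(parents, rev, dest, state):
    def ancestors(r):
        seen, stack = set(), [r]
        while stack:
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            stack.extend(parents.get(cur, ()))
        return seen

    p = parents[rev][0]  # only look at p1 in this sketch
    rebased = [s for s in ancestors(p) if state.get(s, 0) > 0]
    return state[max(rebased)] if rebased else dest


# B (rev 2) was already rebased to B1 (rev 5); E (rev 4) descends from B, so
# E's destination is adjusted from F (rev 3) to B1 (rev 5).
assert _adjustdest_sketch({1: (), 2: (1,), 4: (2,)}, 4, 3, {2: 5}) == 5

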
def _checkobsrebase(repo, ui, rebaseobsrevs, rebaseobsskipped):
    """
    Abort if rebase will create divergence or rebase is noop because of markers

    `rebaseobsrevs`: set of obsolete revisions in source
    `rebaseobsskipped`: set of revisions from source skipped because they have
    successors in destination or no non-obsolete successor.
    """
    # Obsolete node with successors not in dest leads to divergence
    divergenceok = obsolete.isenabled(repo, obsolete.allowdivergenceopt)
    divergencebasecandidates = rebaseobsrevs - rebaseobsskipped

    if divergencebasecandidates and not divergenceok:
        divhashes = (bytes(repo[r]) for r in divergencebasecandidates)
        msg = _(b"this rebase will cause divergences from: %s")
        h = _(
            b"to force the rebase please set "
            b"experimental.evolution.allowdivergence=True"
        )
        raise error.StateError(msg % (b",".join(divhashes),), hint=h)


def successorrevs(unfi, rev):
    """yield revision numbers for successors of rev"""
    assert unfi.filtername is None
    get_rev = unfi.changelog.index.get_rev
    for s in obsutil.allsuccessors(unfi.obsstore, [unfi[rev].node()]):
        r = get_rev(s)
        if r is not None:
            yield r


def defineparents(repo, rev, destmap, state, skipped, obsskipped):
    """Return new parents and optionally a merge base for rev being rebased

    The destination specified by "dest" cannot always be used directly because
    a previous rebase result could affect the destination. For example,

          D E    rebase -r C+D+E -d B
          |/     C will be rebased to C'
        B C      D's new destination will be C' instead of B
        |/       E's new destination will be C' instead of B
        A

    The new parents of a merge are slightly more complicated. See the comment
    block below.
    """
    # use unfiltered changelog since successorrevs may return filtered nodes
    assert repo.filtername is None
    cl = repo.changelog
    isancestor = cl.isancestorrev

    dest = destmap[rev]
    oldps = repo.changelog.parentrevs(rev)  # old parents
    newps = [nullrev, nullrev]  # new parents
    dests = adjustdest(repo, rev, destmap, state, skipped)
    bases = list(oldps)  # merge base candidates, initially just old parents

    if all(r == nullrev for r in oldps[1:]):
        # For non-merge changeset, just move p to adjusted dest as requested.
        newps[0] = dests[0]
    else:
        # For merge changeset, if we move p to dests[i] unconditionally, both
        # parents may change and the end result looks like "the merge loses a
        # parent", which is a surprise. This is a limitation because "--dest"
        # only accepts one dest per src.
        #
        # Therefore, only move p with reasonable conditions (in this order):
        #   1. use dest, if dest is a descendant of (p or one of p's successors)
        #   2. use p's rebased result, if p is rebased (state[p] > 0)
        #
        # Compared with adjustdest, the logic here does some additional work:
        #   1. decide which parents will not be moved towards dest
        #   2. if the above decision is "no", should a parent still be moved
        #      because it was rebased?
        #
        # For example:
        #
        #      C    # "rebase -r C -d D" is an error since none of the parents
        #     /|    # can be moved. "rebase -r B+C -d D" will move C's parent
        #    A B D  # B (using rule "2."), since B will be rebased.
        #
        # The loop tries not to rely on the fact that a Mercurial node has
        # at most 2 parents.
        for i, p in enumerate(oldps):
            np = p  # new parent
            if any(isancestor(x, dests[i]) for x in successorrevs(repo, p)):
                np = dests[i]
            elif p in state and state[p] > 0:
                np = state[p]

            # If one parent becomes an ancestor of the other, drop the ancestor
            for j, x in enumerate(newps[:i]):
                if x == nullrev:
                    continue
                if isancestor(np, x):  # CASE-1
                    np = nullrev
                elif isancestor(x, np):  # CASE-2
                    newps[j] = np
                    np = nullrev
                    # New parents forming an ancestor relationship does not
                    # mean the old parents have a similar relationship. Do not
                    # set bases[x] to nullrev.
                    bases[j], bases[i] = bases[i], bases[j]

            newps[i] = np

        # "rebasenode" updates to new p1, and the old p1 will be used as merge
        # base. If only p2 changes, merging using unchanged p1 as merge base is
        # suboptimal. Therefore swap parents to make the merge sane.
        if newps[1] != nullrev and oldps[0] == newps[0]:
            assert len(newps) == 2 and len(oldps) == 2
            newps.reverse()
            bases.reverse()

        # No parent change might be an error because we fail to make rev a
        # descendant of requested dest. This can happen, for example:
        #
        #      C    # rebase -r C -d D
        #     /|    # None of A and B will be changed to D and rebase fails.
        #    A B D
        if set(newps) == set(oldps) and dest not in newps:
            raise error.InputError(
                _(
                    b'cannot rebase %d:%s without '
                    b'moving at least one of its parents'
                )
                % (rev, repo[rev])
            )

    # Source should not be ancestor of dest. The check here guarantees it's
    # impossible. With multi-dest, the initial check does not cover complex
    # cases since we don't have abstractions to dry-run rebase cheaply.
    if any(p != nullrev and isancestor(rev, p) for p in newps):
        raise error.InputError(_(b'source is ancestor of destination'))

    # Check if the merge will contain unwanted changes. That may happen if
    # there are multiple special (non-changelog ancestor) merge bases, which
    # cannot be handled well by the 3-way merge algorithm. For example:
    #
    #      F
    #     /|
    #    D E  # "rebase -r D+E+F -d Z", when rebasing F, if "D" was chosen
    #    | |  # as merge base, the difference between D and F will include
    #    B C  # C, so the rebased F will contain C surprisingly. If "E" was
    #    |/   # chosen, the rebased F will contain B.
    #    A Z
    #
    # But our merge base candidates (D and E in above case) could still be
    # better than the default (ancestor(F, Z) == null). Therefore still
    # pick one (so choose p1 above).
    if sum(1 for b in set(bases) if b != nullrev and b not in newps) > 1:
        unwanted = [None, None]  # unwanted[i]: unwanted revs if we choose bases[i]
        for i, base in enumerate(bases):
            if base == nullrev or base in newps:
                continue
            # Revisions in the side (not chosen as merge base) branch that
            # might contain "surprising" contents
            other_bases = set(bases) - {base}
            siderevs = list(
                repo.revs(b'(%ld %% (%d+%d))', other_bases, base, dest)
            )

            # If those revisions are covered by rebaseset, the result is good.
            # A merge in rebaseset would be considered to cover its ancestors.
            if siderevs:
                rebaseset = [
                    r for r, d in state.items() if d > 0 and r not in obsskipped
                ]
                merges = [
                    r for r in rebaseset if cl.parentrevs(r)[1] != nullrev
                ]
                unwanted[i] = list(
                    repo.revs(
                        b'%ld - (::%ld) - %ld', siderevs, merges, rebaseset
                    )
                )

        if any(revs is not None for revs in unwanted):
            # Choose a merge base that has a minimal number of unwanted revs.
            l, i = min(
                (len(revs), i)
                for i, revs in enumerate(unwanted)
                if revs is not None
            )

            # The merge will include unwanted revisions. Abort now. Revisit
            # this if we have a more advanced merge algorithm that handles
            # multiple bases.
            if l > 0:
                unwanteddesc = _(b' or ').join(
                    (
                        b', '.join(b'%d:%s' % (r, repo[r]) for r in revs)
                        for revs in unwanted
                        if revs is not None
                    )
                )
                raise error.InputError(
                    _(b'rebasing %d:%s will include unwanted changes from %s')
                    % (rev, repo[rev], unwanteddesc)
                )

            # newps[0] should match merge base if possible. Currently, if
            # newps[i] is nullrev, the only case is newps[i] and newps[j]
            # (j < i), one is the other's ancestor. In that case, it's fine to
            # not swap newps here. (see CASE-1 and CASE-2 above)
            if i != 0:
                if newps[i] != nullrev:
                    newps[0], newps[i] = newps[i], newps[0]
                bases[0], bases[i] = bases[i], bases[0]

    # "rebasenode" updates to new p1, use the corresponding merge base.
    base = bases[0]

    repo.ui.debug(b" future parents are %d and %d\n" % tuple(newps))

    return newps[0], newps[1], base


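# A minimal standalone sketch (not part of rebase.py) of the CASE-1/CASE-2
# rule used above: when one prospective new parent becomes an ancestor of
# another, the ancestor is dropped. `isancestor` is a stand-in for
# changelog.isancestorrev, None plays the role of nullrev, and the revision
# numbers are made up for illustration.
def _dedupe_parents_sketch(newps, isancestor):
    out = []
    for np in newps:
        for j, x in enumerate(out):
            if x is None or np is None:
                continue
            if isancestor(np, x):  # CASE-1: np is the ancestor, drop np
                np = None
            elif isancestor(x, np):  # CASE-2: x is the ancestor, replace it
                out[j] = np
                np = None
        out.append(np)
    return out


# rev 1 is an ancestor of rev 2, so the parent pair (1, 2) collapses to
# (2, None) instead of surprising the user with two related parents
assert _dedupe_parents_sketch([1, 2], lambda a, b: (a, b) == (1, 2)) == [2, None]

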
def isagitpatch(repo, patchname):
    """Return true if the given patch is in git format"""
    mqpatch = os.path.join(repo.mq.path, patchname)
    for line in patch.linereader(open(mqpatch, b'rb')):
        if line.startswith(b'diff --git'):
            return True
    return False


def updatemq(repo, state, skipped, **opts):
    """Update rebased mq patches - finalize and then import them"""
    mqrebase = {}
    mq = repo.mq
    original_series = mq.fullseries[:]
    skippedpatches = set()

    for p in mq.applied:
        rev = repo[p.node].rev()
        if rev in state:
            repo.ui.debug(
                b'revision %d is an mq patch (%s), finalize it.\n'
                % (rev, p.name)
            )
            mqrebase[rev] = (p.name, isagitpatch(repo, p.name))
        else:
            # Applied but not rebased, not sure this should happen
            skippedpatches.add(p.name)

    if mqrebase:
        mq.finish(repo, mqrebase.keys())

        # We must start import from the newest revision
        for rev in sorted(mqrebase, reverse=True):
            if rev not in skipped:
                name, isgit = mqrebase[rev]
                repo.ui.note(
                    _(b'updating mq patch %s to %d:%s\n')
                    % (name, state[rev], repo[state[rev]])
                )
                mq.qimport(
                    repo,
                    (),
                    patchname=name,
                    git=isgit,
                    rev=[b"%d" % state[rev]],
                )
            else:
                # Rebased and skipped
                skippedpatches.add(mqrebase[rev][0])

        # Patches were either applied and rebased and imported in
        # order, applied and removed or unapplied. Discard the removed
        # ones while preserving the original series order and guards.
        newseries = [
            s
            for s in original_series
            if mq.guard_re.split(s, 1)[0] not in skippedpatches
        ]
        mq.fullseries[:] = newseries
        mq.seriesdirty = True
        mq.savedirty()


def storecollapsemsg(repo, collapsemsg):
    """Store the collapse message to allow recovery"""
    collapsemsg = collapsemsg or b''
    f = repo.vfs(b"last-message.txt", b"w")
    f.write(b"%s\n" % collapsemsg)
    f.close()


def clearcollapsemsg(repo):
    """Remove collapse message file"""
    repo.vfs.unlinkpath(b"last-message.txt", ignoremissing=True)


def restorecollapsemsg(repo, isabort):
    """Restore previously stored collapse message"""
    try:
        f = repo.vfs(b"last-message.txt")
        collapsemsg = f.readline().strip()
        f.close()
    except FileNotFoundError:
        if isabort:
            # Oh well, just abort like normal
            collapsemsg = b''
        else:
            raise error.Abort(_(b'missing .hg/last-message.txt for rebase'))
    return collapsemsg


def clearstatus(repo):
    """Remove the status files"""
    # Make sure the active transaction won't write the state file
    tr = repo.currenttransaction()
    if tr:
        tr.removefilegenerator(b'rebasestate')
    repo.vfs.unlinkpath(b"rebasestate", ignoremissing=True)


def sortsource(destmap):
    """yield source revisions in an order that ensures each revision is
    rebased only once

    If source and destination overlap, we must hold back revisions that depend
    on other revisions which haven't been rebased yet.

    Yield a sorted list of revisions each time.

    For example, when rebasing A to B and B to C, this function yields [B],
    then [A], indicating B needs to be rebased first.

    Raise if there is a cycle so the rebase is impossible.
    """
    srcset = set(destmap)
    while srcset:
        srclist = sorted(srcset)
        result = []
        for r in srclist:
            if destmap[r] not in srcset:
                result.append(r)
        if not result:
            raise error.InputError(_(b'source and destination form a cycle'))
        srcset -= set(result)
        yield result


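# A standalone sketch (not part of rebase.py) of the batching logic above on
# a plain {src: dest} mapping; the revision numbers are made up. Each pass
# emits the revisions whose destination is no longer pending, and an empty
# pass means the mapping contains a cycle.
def _sortsource_sketch(destmap):
    srcset = set(destmap)
    while srcset:
        batch = [r for r in sorted(srcset) if destmap[r] not in srcset]
        if not batch:
            raise ValueError('source and destination form a cycle')
        srcset -= set(batch)
        yield batch


# rebasing 1 onto 2 and 2 onto 3: rev 2 must be rebased before rev 1
assert list(_sortsource_sketch({1: 2, 2: 3})) == [[2], [1]]

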
def buildstate(repo, destmap, collapse):
    """Define which revisions are going to be rebased and where

    repo: repo
    destmap: {srcrev: destrev}
    """
    rebaseset = destmap.keys()
    originalwd = repo[b'.'].rev()

    # This check isn't strictly necessary, since mq detects commits over an
    # applied patch. But it prevents messing up the working directory when
    # a partially completed rebase is blocked by mq.
    if b'qtip' in repo.tags():
        mqapplied = {repo[s.node].rev() for s in repo.mq.applied}
        if set(destmap.values()) & mqapplied:
            raise error.StateError(_(b'cannot rebase onto an applied mq patch'))

    # Get "cycle" error early by exhausting the generator.
    sortedsrc = list(sortsource(destmap))  # a list of sorted revs
    if not sortedsrc:
        raise error.InputError(_(b'no matching revisions'))

    # Only check the first batch of revisions to rebase not depending on other
    # rebaseset. This means "source is ancestor of destination" is not checked
    # here for the second (and following) batches of revisions. We rely on
    # "defineparents" to do that check.
    roots = list(repo.set(b'roots(%ld)', sortedsrc[0]))
    if not roots:
        raise error.InputError(_(b'no matching revisions'))

    def revof(r):
        return r.rev()

    roots = sorted(roots, key=revof)
    state = dict.fromkeys(rebaseset, revtodo)
    emptyrebase = len(sortedsrc) == 1
    for root in roots:
        dest = repo[destmap[root.rev()]]
        commonbase = root.ancestor(dest)
        if commonbase == root:
            raise error.InputError(_(b'source is ancestor of destination'))
        if commonbase == dest:
            wctx = repo[None]
            if dest == wctx.p1():
                # when rebasing to '.', it will use the current wd branch name
                samebranch = root.branch() == wctx.branch()
            else:
                samebranch = root.branch() == dest.branch()
            if not collapse and samebranch and dest in root.parents():
                # mark the revision as done by setting its new revision
                # equal to its old (current) revision
                state[root.rev()] = root.rev()
                repo.ui.debug(b'source is a child of destination\n')
                continue

        emptyrebase = False
        repo.ui.debug(b'rebase onto %s starting from %s\n' % (dest, root))
    if emptyrebase:
        return None
    for rev in sorted(state):
        parents = [p for p in repo.changelog.parentrevs(rev) if p != nullrev]
        # if all parents of this revision are done, then so is this revision
        if parents and all((state.get(p) == p for p in parents)):
            state[rev] = rev
    return originalwd, destmap, state


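# A standalone sketch (not part of rebase.py) of the final propagation loop
# above: a revision whose parents are all already "done" (mapped to
# themselves in state) is itself marked done. The parent map and revision
# numbers are made up; -1 stands in for revtodo.
def _propagate_done_sketch(parents, state):
    for rev in sorted(state):
        ps = [p for p in parents.get(rev, ()) if p is not None]
        if ps and all(state.get(p) == p for p in ps):
            state[rev] = rev
    return state


# rev 2 is already done, so its child rev 3 gets marked done as well
assert _propagate_done_sketch({3: (2,)}, {2: 2, 3: -1})[3] == 3

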
def clearrebased(
    ui,
    repo,
    destmap,
    state,
    skipped,
    collapsedas=None,
    keepf=False,
    fm=None,
    backup=True,
):
    """dispose of rebased revisions at the end of the rebase

    If `collapsedas` is not None, the rebase was a collapse whose result is
    the `collapsedas` node.

    If `keepf` is True, the rebase has --keep set and no nodes should be
    removed (but bookmarks still need to be moved).

    If `backup` is False, no backup will be stored when stripping rebased
    revisions.
    """
    tonode = repo.changelog.node
    replacements = {}
    moves = {}
    stripcleanup = not obsolete.isenabled(repo, obsolete.createmarkersopt)

    collapsednodes = []
    for rev, newrev in sorted(state.items()):
        if newrev >= 0 and newrev != rev:
            oldnode = tonode(rev)
            newnode = collapsedas or tonode(newrev)
            moves[oldnode] = newnode
            succs = None
            if rev in skipped:
                if stripcleanup or not repo[rev].obsolete():
                    succs = ()
            elif collapsedas:
                collapsednodes.append(oldnode)
            else:
                succs = (newnode,)
            if succs is not None:
                replacements[(oldnode,)] = succs
    if collapsednodes:
        replacements[tuple(collapsednodes)] = (collapsedas,)
    if fm:
        hf = fm.hexfunc
        fl = fm.formatlist
        fd = fm.formatdict
        changes = {}
        for oldns, newn in replacements.items():
            for oldn in oldns:
                changes[hf(oldn)] = fl([hf(n) for n in newn], name=b'node')
        nodechanges = fd(changes, key=b"oldnode", value=b"newnodes")
        fm.data(nodechanges=nodechanges)
    if keepf:
        replacements = {}
    scmutil.cleanupnodes(repo, replacements, b'rebase', moves, backup=backup)


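# A standalone sketch (not part of rebase.py) of the shape of the
# `replacements` mapping built above, using plain ints in place of nodes:
# skipped revisions map to an empty successor tuple (they are simply pruned),
# all other rebased revisions map to their new location.
def _replacements_sketch(state, skipped):
    replacements = {}
    for rev, newrev in sorted(state.items()):
        if newrev >= 0 and newrev != rev:
            replacements[(rev,)] = () if rev in skipped else (newrev,)
    return replacements


assert _replacements_sketch({1: 4, 2: 5, 3: 3}, skipped={2}) == {
    (1,): (4,),
    (2,): (),
}

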
def pullrebase(orig, ui, repo, *args, **opts):
    """Call rebase after pull if the latter has been invoked with --rebase"""
    if opts.get('rebase'):
        if ui.configbool(b'commands', b'rebase.requiredest'):
            msg = _(b'rebase destination required by configuration')
            hint = _(b'use hg pull followed by hg rebase -d DEST')
            raise error.InputError(msg, hint=hint)

        with repo.wlock(), repo.lock():
            if opts.get('update'):
                del opts['update']
                ui.debug(
                    b'--update and --rebase are not compatible, ignoring '
                    b'the update flag\n'
                )

            cmdutil.checkunfinished(repo, skipmerge=True)
            cmdutil.bailifchanged(
                repo,
                hint=_(
                    b'cannot pull with rebase: '
                    b'please commit or shelve your changes first'
                ),
            )

            revsprepull = len(repo)
            origpostincoming = cmdutil.postincoming

            def _dummy(*args, **kwargs):
                pass

            cmdutil.postincoming = _dummy
            try:
                ret = orig(ui, repo, *args, **opts)
            finally:
                cmdutil.postincoming = origpostincoming
            revspostpull = len(repo)
            if revspostpull > revsprepull:
                # the --rev option from pull conflicts with rebase's own
                # --rev, so drop it
                if 'rev' in opts:
                    del opts['rev']
                # positional argument from pull conflicts with rebase's own
                # --source.
                if 'source' in opts:
                    del opts['source']
                # revsprepull is the len of the repo, not revnum of tip.
                destspace = list(repo.changelog.revs(start=revsprepull))
                opts['_destspace'] = destspace
                try:
                    rebase(ui, repo, **opts)
                except error.NoMergeDestAbort:
                    # we can maybe update instead
                    rev, _a, _b = destutil.destupdate(repo)
                    if rev == repo[b'.'].rev():
                        ui.status(_(b'nothing to rebase\n'))
                    else:
                        ui.status(_(b'nothing to rebase - updating instead\n'))
                        # not passing argument to get the bare update behavior
                        # with warning and trumpets
                        commands.update(ui, repo)
    else:
        if opts.get('tool'):
            raise error.InputError(_(b'--tool can only be used with --rebase'))
        ret = orig(ui, repo, *args, **opts)

    return ret


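# The save/replace/restore dance around `cmdutil.postincoming` above is the
# classic "temporarily silence a module-level hook" pattern. A standalone
# sketch of the same idea as a context manager (not part of rebase.py; the
# object and attribute names below are made up):
import contextlib
import types


@contextlib.contextmanager
def _silenced(obj, attr):
    saved = getattr(obj, attr)
    setattr(obj, attr, lambda *a, **kw: None)
    try:
        yield
    finally:
        setattr(obj, attr, saved)


_mod = types.SimpleNamespace(hook=lambda: 'noisy')
with _silenced(_mod, 'hook'):
    assert _mod.hook() is None  # silenced for the duration of the block
assert _mod.hook() == 'noisy'  # restored afterwards

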
def _compute_obsolete_sets(repo, rebaseobsrevs, destmap):
    """Figure out what to do about obsolete revisions

    `obsolete_with_successor_in_destination` is a mapping obsolete =>
    successor for all obsolete nodes to be rebased given in `rebaseobsrevs`.

    `obsolete_with_successor_in_rebase_set` is a set of obsolete revisions,
    without a successor in destination, that would cause divergence.
    """
    obsolete_with_successor_in_destination = {}
    obsolete_with_successor_in_rebase_set = set()

    cl = repo.changelog
    get_rev = cl.index.get_rev
    extinctrevs = set(repo.revs(b'extinct()'))
    for srcrev in rebaseobsrevs:
        srcnode = cl.node(srcrev)
        # XXX: more advanced APIs are required to handle split correctly
        successors = set(obsutil.allsuccessors(repo.obsstore, [srcnode]))
        # obsutil.allsuccessors includes node itself
        successors.remove(srcnode)
        succrevs = {get_rev(s) for s in successors}
        succrevs.discard(None)
        if not successors or succrevs.issubset(extinctrevs):
            # no successor, or all successors are extinct
            obsolete_with_successor_in_destination[srcrev] = None
        else:
            dstrev = destmap[srcrev]
            for succrev in succrevs:
                if cl.isancestorrev(succrev, dstrev):
                    obsolete_with_successor_in_destination[srcrev] = succrev
                    break
            else:
                # If 'srcrev' has a successor in rebase set but none in
                # destination (which would be caught above), we shall skip it
                # and its descendants to avoid divergence.
                if srcrev in extinctrevs or any(s in destmap for s in succrevs):
                    obsolete_with_successor_in_rebase_set.add(srcrev)

    return (
        obsolete_with_successor_in_destination,
        obsolete_with_successor_in_rebase_set,
    )


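# A standalone sketch (not part of rebase.py) of the classification above,
# with plain sets in place of the changelog: an obsolete source rev is either
# rebased anyway (no live successor), skipped because a successor already
# lives among the destination's ancestors, or skipped because a successor is
# in the rebase set (which would otherwise create divergence). All revision
# numbers are made up.
def _classify_obsolete_sketch(srcrev, succrevs, dest_ancestors, destmap, extinct):
    live = {s for s in succrevs if s is not None}
    if not live or live <= extinct:
        return 'rebase anyway'
    if live & dest_ancestors:
        return 'skip: successor in destination'
    if srcrev in extinct or any(s in destmap for s in live):
        return 'skip: successor in rebase set'
    return 'rebase anyway'


assert (
    _classify_obsolete_sketch(1, {7}, {5, 6, 7}, {1: 5}, set())
    == 'skip: successor in destination'
)

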
def abortrebase(ui, repo):
    with repo.wlock(), repo.lock():
        rbsrt = rebaseruntime(repo, ui)
        rbsrt._prepareabortorcontinue(isabort=True)


def continuerebase(ui, repo):
    with repo.wlock(), repo.lock():
        rbsrt = rebaseruntime(repo, ui)
        ms = mergestatemod.mergestate.read(repo)
        mergeutil.checkunresolved(ms)
        retcode = rbsrt._prepareabortorcontinue(isabort=False)
        if retcode is not None:
            return retcode
        rbsrt._performrebase(None)
        rbsrt._finishrebase()


def summaryhook(ui, repo):
    if not repo.vfs.exists(b'rebasestate'):
        return
    try:
        rbsrt = rebaseruntime(repo, ui, {})
        rbsrt.restorestatus()
        state = rbsrt.state
    except error.RepoLookupError:
        # i18n: column positioning for "hg summary"
        msg = _(b'rebase: (use "hg rebase --abort" to clear broken state)\n')
        ui.write(msg)
        return
    numrebased = len([i for i in state.values() if i >= 0])
    # i18n: column positioning for "hg summary"
    ui.write(
        _(b'rebase: %s, %s (rebase --continue)\n')
        % (
            ui.label(_(b'%d rebased'), b'rebase.rebased') % numrebased,
            ui.label(_(b'%d remaining'), b'rebase.remaining')
            % (len(state) - numrebased),
        )
    )


def uisetup(ui):
    # Replace pull with a decorator to provide --rebase option
    entry = extensions.wrapcommand(commands.table, b'pull', pullrebase)
    entry[1].append(
        (b'', b'rebase', None, _(b"rebase working directory to branch head"))
    )
    entry[1].append((b't', b'tool', b'', _(b"specify merge tool for rebase")))
    cmdutil.summaryhooks.add(b'rebase', summaryhook)
    statemod.addunfinished(
        b'rebase',
        fname=b'rebasestate',
        stopflag=True,
        continueflag=True,
        abortfunc=abortrebase,
        continuefunc=continuerebase,
    )